Everything You Need to Know About the EU AI Act
Published 03/27/2024
Originally published by BARR Advisory.
Written by Claire McKenna.
We’ve recently witnessed the rapid expansion of artificial intelligence (AI), and we can expect its continued integration into our daily lives. As our use of and reliance on AI grow, so do the potential security risks that come with it. These risks have prompted several new standards to address the security concerns posed by AI, including the NIST AI Risk Management Framework and ISO/IEC 42001.
The European Union (EU) is currently working to enact the world’s first comprehensive legal framework governing the use of AI. While the final text of the European Union Artificial Intelligence Act (EU AI Act) has yet to be published, there are a few key takeaways from what we know so far. Let’s take a closer look:
What is the EU AI Act?
Late last year, the European Parliament and the Council reached a deal on landmark new rules governing the use of AI. As the world’s first comprehensive legal framework on AI, the EU AI Act “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field.”
It prohibits AI systems that pose an “unacceptable risk” from being used in the EU and subjects systems that pose “high risk” or “limited risk” to requirements proportionate to that risk, including transparency obligations. These may include providing technical documentation, ensuring compliance with EU copyright law, and providing detailed summaries about how the system was trained.
How does it take a risk-based approach?
The Act divides AI systems into tiers based on their potential risk and level of impact: the higher the risk and impact, the more requirements the system will be subject to. Systems posing minimal or limited risk will be subject to transparency requirements, such as informing users that they are interacting with AI. High-risk systems will face much stricter requirements, including mandatory impact assessments, data governance requirements, registration in an EU database, and more.
Examples of high-risk AI include systems used in sensitive areas such as employment, healthcare, and critical infrastructure.
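To make the tiering concrete, here is a minimal, purely illustrative sketch of how an organization might map AI systems to risk tiers and the kinds of obligations described above. The tier names and obligation lists are simplified assumptions for illustration, not the Act’s legal text.

```python
from enum import Enum

# Illustrative only: tier names and obligations are simplified assumptions,
# not the Act's legal text. Always consult the final published Act.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited from use in the EU
    HIGH = "high"                   # strictest requirements
    LIMITED = "limited"             # transparency requirements
    MINIMAL = "minimal"             # few or no additional requirements

# A rough mapping of tiers to the kinds of obligations discussed above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "mandatory impact assessment",
        "data governance requirements",
        "registration in an EU database",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations associated with a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```

In practice, classifying a system would depend on its intended use and the Act’s final definitions rather than a simple lookup like this.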
Who does it apply to?
The EU AI Act will apply to any providers or deployers of in-scope AI systems that are used in the EU. Just as U.S.-based organizations must often comply with the General Data Protection Regulation (GDPR) when operating in Europe, organizations outside of the EU will have to comply with the EU AI Act if their AI systems are used within the EU.
The Act defines AI as “a machine-based system that infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments.” It will not apply to AI systems used exclusively for military purposes, systems used exclusively for research, or systems used by individuals for non-professional purposes.
When will it go into effect?
Because the Act is still in the final stages of the legislative process, the exact date it will take effect is uncertain, but it is expected to be enacted into law by 2026.
What are the penalties for non-compliance?
Penalties for violating the EU AI Act will be calculated similarly to penalties under the GDPR: as a fixed amount or a percentage of global annual turnover, whichever is higher. Fines for non-compliance range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the specific infringement.
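As a rough illustration of the “whichever is higher” rule, the sketch below computes the potential fine for a given turnover using the figures cited above. It is an assumption about how the calculation is structured, not legal guidance.

```python
# Illustrative sketch of the "fixed amount or percentage of global turnover,
# whichever is higher" rule described above. Thresholds are the ranges cited
# in this article; this is not legal guidance.

def max_fine(global_turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the higher of the fixed amount and the percentage of turnover."""
    return max(fixed_eur, pct * global_turnover_eur)

if __name__ == "__main__":
    turnover = 2_000_000_000  # e.g., EUR 2 billion in global annual turnover
    # Lower end of the range: EUR 7.5 million or 1.5% of turnover
    print(max_fine(turnover, 7_500_000, 0.015))   # 30,000,000.0
    # Upper end of the range: EUR 35 million or 7% of turnover
    print(max_fine(turnover, 35_000_000, 0.07))   # 140,000,000.0
```

For large organizations, the percentage-based figure will typically exceed the fixed amount, which is why fines are often discussed in terms of global turnover.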
What’s next for the EU AI Act?
The Act will now move through the final stages of the legislative process: it will be submitted to representatives of EU member states for approval, after which the final text will be published. As the world’s first major AI legislation, its enactment is expected to have a global impact.