
What to Know About the New EU AI Act
Blog Article Published: 01/24/2024

Originally published by Schellman.

After 22 grueling hours of negotiations, policymakers within the European Union (EU) have reached a provisional agreement on new rules to govern the most powerful artificial intelligence (AI) models. They're calling it the EU AI Act, and though the core provisions have been hashed out, disagreements over the Act's law enforcement exemptions have led to a recess in the negotiations.

Though there's still more to learn about this important milestone in AI regulation, some early takeaways can already be drawn from the provisions. To help you understand the latest developments in the EU, we're going to briefly explain 10 things you should know about the new EU AI Act.

What is the EU AI Act?

As artificial intelligence continues to advance, so does its integration into society. But as our reliance on the technology increases—as with every new digital tool—security concerns have grown as well, prompting action from different governing bodies to ensure AI is developed and used safely.

New standards have emerged, like the NIST AI Risk Management Framework; others, like ISO 42001, are still on the way; and existing ones—like HITRUST—have adjusted their requirements to address AI.

AI has even been addressed by the current American federal government through President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, but the EU has now taken things a step further with its AI Act.

As the world’s first comprehensive legal framework on AI, the EU AI Act “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field.”

5 Takeaways from the EU AI Act (December 2023)

To achieve these aims, the EU AI Act will establish rules for AI based on its potential risks and level of impact—here are key things we know right now about the EU AI Act.

1. Scope of Regulation:

As a starting point for applicability, the definition of AI in the EU AI Act closely follows that of the OECD (though not verbatim).

That being said, free and open-source software will generally be excluded from this regulation's scope unless it:

  • Poses a high risk;
  • Is involved in prohibited applications; or
  • Presents a risk of manipulation.

2. Governance:

To ensure adherence to the provisions by applicable systems, national competent authorities will oversee AI systems, and the European Artificial Intelligence Board will facilitate consistent application of the law.

Furthermore, a specific AI Office will be established within the European Commission to enforce foundation model provisions.

3. Foundation Models:

Speaking of which, the Act takes a tiered approach to these models, categorizing an AI model as "systemic" if it was trained with computing power above a certain threshold—criteria for designation decisions by the AI Office include:

  • The number of business users; and
  • Model parameters.

4. Transparency Obligations:

The transparency requirements of this new regulation will apply to all AI models, and there will be an obligation to publish a sufficiently detailed summary of training data. Trade secrets must be respected, and AI-generated content must be immediately recognizable as such.

5. Stakeholder Engagement:

Along with the AI Office, an advisory forum will be formed to gather feedback from other stakeholders, including civil society.

Meanwhile, a scientific panel of independent experts will:

  • Advise on regulation enforcement;
  • Flag potential systemic risks; and
  • Inform AI model classification.

4 Items Still Under Ongoing Consideration Within the EU AI Act

While all of the above has been agreed to at this point, lawmakers still have the following considerations to finalize regarding the EU AI Act.

1. Prohibited Practices:

The AI Act already includes a list of banned AI applications, including:

  • Manipulative techniques;
  • Exploitation of vulnerabilities;
  • Social scoring; and
  • Indiscriminate scraping of facial images.

However, disagreements persist on the extent of the ban, as the European Parliament has proposed a broader list.

2. Application to Pre-Existing AI Systems:

Ongoing discussions will also address whether this regulation should apply to AI systems that were on the market before the Act's implementation—particularly if they undergo significant changes.

3. National Security Exemption:

Another point of contention is the national security exemption.

Some EU countries, led by France, have called for a broad exemption for AI systems used in military or defense applications, including those built by external contractors. Others oppose such blanket loopholes, arguing instead that any national security exception from the AI Act should be assessed on a case-by-case basis—in line with both existing EU law and the EU Charter of Fundamental Rights—so discussions on this issue will continue.

4. Law Enforcement Exemptions:

Negotiations concerning law enforcement provisions are similarly ongoing, with debates regarding:

  • Predictive policing;
  • Emotion recognition software; and
  • The use of Remote Biometric Identification (RBI).

What’s Next for the EU AI Act?

Though the EU AI Act already represents a significant step in regulating AI with its focus on mitigating potential risks while promoting transparency and accountability, the outcome of ongoing negotiations will determine the final provisions of this landmark legislation.

As it’s now in the final stage of the legislative process, with the EU Commission, Council, and Parliament engaged in said negotiations, we’ll have to wait and see where the lawmakers come down on the remaining items that need addressing.
