The EU AI Act and SMB Compliance
Published 12/18/2024
Originally published by Scrut Automation.
Written by Nicholas Muy, Chief Information Security Officer, Scrut Automation.
On July 12, 2024, the Official Journal of the European Union (EU) published the full text of the AI Act, setting in motion the final chapter of the most impactful security and privacy law since the General Data Protection Regulation (GDPR) came into force in 2018.
It will have enormous implications for how companies do business in the EU and globally.
Let’s examine the practical implications of this law for small and medium businesses (SMBs).
The law applies broadly
Definitions are important when it comes to new legislation, and the AI Act is broad in this respect. For example, it defines an “AI system” as a machine-based system “designed to operate with varying levels of autonomy” that “may exhibit adaptiveness after deployment.”
This definition can cover a wide range of software applications that SMBs use, develop, and resell. While the law makes specific allowances for SMBs and startups, they are not exempt from its requirements.
The Act also lays out a variety of roles related to AI systems, such as:
- Provider — anyone who develops an AI system (or has one developed on their behalf) and places it on the EU market under their own name or trademark
- Deployer — anyone using an AI system (with an exception for personal use)
- Importer — anyone located or established in the EU who places on the market an AI system bearing the name or trademark of a person established outside the EU
- Distributor — anyone in the supply chain, other than the provider or the importer, who makes an AI system available on the EU market
If there is any chance your company does any of these things with AI systems, you should keep reading.
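To make these role definitions concrete, here is a minimal sketch, assuming a simplified yes/no questionnaire; the questions and logic are illustrative assumptions, not a legal test, and a single company can hold several roles at once.

```python
# Illustrative only: a simplified mapping from yes/no questions to the
# AI Act's roles. Real classification requires legal analysis.

def aia_roles(develops_or_commissions: bool,
              markets_under_own_name: bool,
              uses_professionally: bool,
              located_in_eu: bool,
              imports_third_country_system: bool,
              resells_in_supply_chain: bool) -> list[str]:
    roles = []
    if develops_or_commissions and markets_under_own_name:
        roles.append("provider")
    if uses_professionally:
        roles.append("deployer")
    if located_in_eu and imports_third_country_system:
        roles.append("importer")
    if resells_in_supply_chain and "provider" not in roles:
        roles.append("distributor")
    return roles

# Example: an EU SaaS company that ships an AI feature under its own brand
# and also uses third-party AI tools internally.
print(aia_roles(develops_or_commissions=True, markets_under_own_name=True,
                uses_professionally=True, located_in_eu=True,
                imports_third_country_system=False,
                resells_in_supply_chain=False))
# -> ['provider', 'deployer']
```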
Documentation requirements are piling up
A key task for any SMB dealing with EU AI Act requirements is determining whether an AI system for which they are responsible qualifies as “high-risk.”
If so, the Act requires:
- Risk and quality management systems: The focus here is to identify, analyze, estimate, and evaluate risks to health, safety, or fundamental rights. Companies also need to implement appropriate risk management measures.
- A data governance program: Tracking the provenance and quality of training data helps measure and manage biases and ensure representativeness.
- Detailed technical documentation: This is critical for facilitating the safe operation of AI systems and demonstrating compliance with the requirements. It should describe the design, development process, and performance of the AI system.
- Transparency: Those responsible for AI systems need to provide clear and accessible information on their capabilities and limitations. The goal is to ensure users understand the operation and output of the AI system, including foreseeable risks.
- Accuracy, robustness, and cybersecurity: In addition to safety considerations, high-risk systems must ensure consistent performance throughout the lifecycle and resilience against errors, faults, and adversarial attacks.
- Post-market monitoring: By gathering data on the AI system’s performance and compliance, risk management and quality systems can stay up to date (see the sketch after this list).
- Human oversight: A final key requirement for high-risk AI systems is ensuring human operators can understand and appropriately respond to the AI system’s operations and outputs.
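As one illustration of what post-market monitoring might look like once operationalized, here is a minimal sketch of a monitoring record that feeds back into risk management; the schema, field names, and accuracy threshold are assumptions, as the Act prescribes none of them.

```python
# Illustrative post-market monitoring record; schema and threshold are
# assumptions, not requirements taken from the AI Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringRecord:
    system_id: str      # internal identifier for the AI system
    model_version: str  # ties observations back to the technical docs
    accuracy: float     # observed accuracy on live traffic
    incidents: list[str] = field(default_factory=list)
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

    def needs_review(self, accuracy_floor: float = 0.90) -> bool:
        """Flag the record for the risk management process."""
        return self.accuracy < accuracy_floor or bool(self.incidents)

record = MonitoringRecord("resume-screener", "2.3.1", accuracy=0.87,
                          incidents=["possible bias complaint #1042"])
print(record.needs_review())  # True -> escalate into risk management
```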
Even for systems that are not high-risk, the AI Act imposes additional transparency requirements on systems that create lifelike content (which the Act refers to as “deepfakes”) and separate obligations on more powerful general-purpose AI models.
Liability risk expands
Another challenge for SMBs under the EU AI Act will be the increased risk of government and private legal action. The EU AI Act lays out a series of fines to penalize non-compliance: up to €35 million or 7% of worldwide annual turnover for prohibited AI practices, up to €15 million or 3% for violating most other obligations, and up to €7.5 million or 1% for supplying incorrect information to authorities.
For most companies, the applicable cap is the higher of the fixed amount and the turnover percentage; SMBs and startups instead pay the lower of the two, which can still be an enormous burden to a growing company.
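To see how this carve-out changes the math, here is a minimal sketch of the fine caps, assuming the three tiers from Article 99 of the Act; the tier labels, function name, and example turnover are illustrative.

```python
# Maximum fine tiers under Article 99 of the EU AI Act:
# (fixed cap in EUR, share of worldwide annual turnover)
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_info": (7_500_000, 0.01),
}

def max_fine(annual_turnover_eur: float, tier: str, is_sme: bool) -> float:
    """Return the applicable fine cap for a given violation tier."""
    fixed_cap, pct = TIERS[tier]
    turnover_cap = annual_turnover_eur * pct
    # SMEs and startups face the LOWER of the two caps; everyone else the higher.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Hypothetical SMB with EUR 10M in worldwide annual turnover:
print(max_fine(10_000_000, "other_obligations", is_sme=True))   # 300000.0
print(max_fine(10_000_000, "other_obligations", is_sme=False))  # 15000000.0
```

Note how the SME flag flips min to max: the same violation that caps at €300,000 for this hypothetical SMB could reach €15 million for a larger enterprise.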
Furthermore, additional EU regulation may make it easier for private parties to sue AI providers for product defects.
- The revised Product Liability Directive (PLD), adopted in 2024, creates a presumption of defectiveness for AI products that do not comply with mandatory safety requirements (including those of the EU AI Act). This will make it easier for private parties to win in court.
- The proposed AI Liability Directive (AILD) would ease the burden of proving that an AI system caused harm, including for non-customers, making legal action easier to pursue.
ISO 42001 as a way to manage risk
Published at the end of 2023, ISO 42001 is a new compliance standard laying out best practices for building an AI Management System (AIMS). After being evaluated by an external auditor, companies can receive certification under the standard.
In addition to generally building customer trust and ensuring proper AI governance, ISO 42001 is also likely to be adopted as a “harmonized standard” under the EU AI Act. The biggest advantage here is that high-risk AI systems and general-purpose AI models will be presumed to be in conformity with much of the AI Act if they are also compliant with a harmonized standard (like ISO 42001).
While this is no guarantee, it goes a long way toward reducing risk. Other jurisdictions, like the State of Colorado in the United States, have taken similar steps by making ISO 42001 compliance a defense against some alleged violations of the law.
Furthermore, implementing ISO 42001 is itself an effective way to manage risk. At a minimum, it requires:
- Laying out organizational roles and responsibilities when it comes to AI
- Monitoring for incidents and other non-conformities
- Conducting AI risk and impact assessments
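For a flavor of what those risk and impact assessments can look like in practice, here is a minimal sketch of a risk register entry; the fields and the likelihood-times-impact scoring are assumptions, not something ISO 42001 mandates.

```python
# Minimal AI risk register sketch; schema and scoring are illustrative
# assumptions, not mandated by ISO 42001.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    risk: str        # what could go wrong
    affected: str    # who is impacted (users, non-users, the business)
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    treatment: str   # mitigate, transfer, avoid, or accept
    owner: str       # accountable role from the AIMS role definitions

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    AIRiskEntry("Biased screening output", "job applicants", 3, 4,
                "mitigate: bias testing + human review", "Head of HR Tech"),
    AIRiskEntry("Unknown training data provenance", "company", 4, 2,
                "mitigate: data governance program", "CISO"),
]
for r in sorted(risks, key=lambda e: e.score, reverse=True):
    print(f"{r.score:>2}  {r.risk} -> {r.treatment}")
```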
It also includes an expansive set of optional controls in Annex A that facilitate:
- Responsible AI development goals, objectives, and procedures
- Using external input to improve AI system security and safety
- Effective data governance, classification, and labeling
Conclusion
The AI Act is the most consequential piece of AI legislation ever passed, and its impacts will be felt for decades. Whether or not you agree with the EU’s regulatory approach, its requirements will take effect in stages over the coming years.
SMBs with any exposure to the EU market should carefully examine their business to determine if they meet any of the definitions of covered organizations. Even if they don’t, the odds of similar legislation coming into effect in other jurisdictions are high, as Colorado has made clear.
Finally, certifying an AI Management System under ISO 42001 can provide a legal defense in certain scenarios, reducing liability risk. At the same time, the preparation and auditing process itself will make the organization more resilient and responsible in its use of AI systems.
About the Author
Nicholas Muy is the Chief Information Security Officer at Scrut Automation, where he leads cybersecurity, data privacy, and compliance. He is a vocal advocate within the security community for building security programs that align with business objectives. Previously, he was a security and strategy leader at Expedia Group, working across security engineering, architecture, and M&A, and before that he served as a cyber policy strategist at the U.S. Department of Homeland Security. Passionate about security, Nicholas is an active member of the security and technology community, promoting responsible innovation and sound security practices.