
How Can ISO/IEC 42001 & NIST AI RMF Help Comply with the EU AI Act?

Published 01/29/2025


Contributed by Accedere.

Written by Ashwin Chaudhary, CEO, Controllo.ai.


The adoption of AI technologies has skyrocketed in the last few years. In 2019, 58% of organizations used AI for at least one business function; by 2024, that number jumped to 72%. The use of genAI nearly doubled from 2023 to 2024, going from just 33% to 65%.


What is the EU AI Act?

  • The European Union Artificial Intelligence Act (AI Act) Regulation (EU) 2024/1689 is a European Union regulation concerning artificial intelligence (AI).
  • It establishes a common regulatory and legal framework for AI within the European Union (EU). It came into force on 1st August 2024.
  • It covers all types of AI across various sectors, except for AI systems used solely for military, national security, research, and non-professional purposes.
  • It is designed to be applicable across various AI applications and contexts.
  • It is a law with specific compliance requirements.

The landmark EU AI Act strikes a careful balance between innovation and safety in artificial intelligence. By establishing clear guidelines for risk management, ongoing system monitoring, and human supervision, it creates a foundation for AI that people can trust.

The Act safeguards fundamental rights like privacy and ensures AI systems won't discriminate unfairly. Perhaps most importantly, it offers AI developers and companies a unified regulatory framework across Europe – providing the clarity they need to innovate confidently while protecting consumer interests.


Who will be affected?

  • Organizations that use AI and operate within the EU
  • Organizations outside the EU that do business in the EU by developing or using AI systems


Who will not be affected?

It's important to understand what falls outside the AI Act's scope. The regulation takes a hands-off approach to military and national security applications, whether developed by public or private organizations. Similarly, AI systems dedicated purely to scientific research and development enjoy exemption from these rules. This gives researchers the freedom to innovate without regulatory constraints. The key principle to remember is that the Act only kicks in when AI systems are deployed or commercialized – during development, these rules remain dormant.


EU AI Act fines

  • For non-compliance with the prohibited AI practices, organizations can be fined up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.
  • For non-compliance with the requirements for high-risk AI systems, organizations can be fined up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher.
  • The supply of incorrect, incomplete or misleading information to authorities can result in organizations being fined up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher.
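The "whichever is higher" rule in each tier means the effective cap scales with company size. A minimal sketch of that arithmetic, using the three tiers listed above (the tier names are illustrative labels, not terms from the Act):

```python
# Illustrative only: computes the maximum possible fine under the EU AI Act's
# "whichever is higher" rule, using the penalty tiers listed above.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # (EUR cap, share of worldwide turnover)
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the fine's upper bound: the fixed cap or the
    turnover-based cap, whichever is higher."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# For a company with EUR 1 billion annual turnover, 7% of turnover (EUR 70M)
# exceeds the EUR 35M fixed amount, so the turnover-based cap applies.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```

For smaller organizations the fixed amounts dominate, which is why the turnover percentages matter mainly for large multinationals.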


Key players under the EU AI Act

[Figure: Key players under the EU AI Act]


EU AI Act risk-based approach


1. Unacceptable risks:

AI systems that enable manipulation, exploitation, and social control practices are seen as posing an unacceptable risk.

Examples: deploying subliminal, manipulative, or deceptive techniques; exploiting vulnerabilities related to age, disability, or socio-economic circumstances; social scoring; real-time remote biometric identification (RBI); and compiling facial recognition databases.

AI systems posing unacceptable risks (prohibited AI systems) may not be deployed in the EU market.


2. High risks:

AI systems that negatively affect safety or fundamental rights will be considered high risk.

Examples: biometrics; critical infrastructure (e.g., water supply); law enforcement; and judicial and democratic processes.

Providers must establish a risk management system, apply data governance, and maintain a quality management system to ensure compliance. High-risk AI systems may be deployed in the EU market only after these risks have been mitigated.


3. Limited risks:

Some AI systems intended to interact with natural persons or generate content would not necessarily qualify as high-risk AI systems but may entail risks of impersonation or deception. This includes the outputs of most generative AI systems.

Example: Chatbots and Deepfakes.

These systems are subject to lighter transparency obligations: providers must ensure that end users are aware they are interacting with AI. Limited-risk AI systems may be deployed in the EU market with human oversight and monitoring controls.


4. Minimal risks:

The AI Act does not define this category; it simply covers AI systems that do not fall into any of the categories above.

Examples: AI-enabled video games and spam filters.

Minimal-risk AI systems are unregulated and may be deployed freely in the EU market.
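The four tiers above form a simple decision table: each tier maps to a market-access consequence. A minimal sketch of that mapping (the tier names and rule strings are illustrative, paraphrasing the sections above, not wording from the Act):

```python
# Illustrative sketch of the EU AI Act's four risk tiers and the
# market-access consequence each one carries, per the sections above.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. biometrics, critical infrastructure
    LIMITED = "limited"            # e.g. chatbots, deepfakes
    MINIMAL = "minimal"            # e.g. video games, spam filters

MARKET_ACCESS = {
    RiskTier.UNACCEPTABLE: "Deployment in the EU market is prohibited.",
    RiskTier.HIGH: ("Deployable only after risk mitigation: risk management, "
                    "data governance, and a quality management system."),
    RiskTier.LIMITED: "Deployable with transparency obligations and human oversight.",
    RiskTier.MINIMAL: "Deployable without additional AI Act obligations.",
}

def deployment_rule(tier: RiskTier) -> str:
    """Look up the market-access consequence for a risk tier."""
    return MARKET_ACCESS[tier]

print(deployment_rule(RiskTier.LIMITED))
```

In practice, classifying a real system into a tier is the hard part; the obligations that follow from the tier are comparatively mechanical, as the lookup suggests.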


Understanding NIST AI Risk Management Framework (RMF) & ISO/IEC 42001

  • Both are designed to provide a structured approach to managing the risks associated with AI technologies.
  • Both assure responsible, ethical, and trustworthy development, deployment, and use of AI systems.


What is ISO/IEC 42001:2023?

  • ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations.
  • Designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.
  • Applicable to organizations of any size involved in developing, providing, or using AI-based products or services, across all industries, and relevant for public sector agencies as well as companies and non-profits.
  • Designed to be applicable across various AI applications and contexts.


Mapped requirements

[Table: EU AI Act requirements mapped to ISO/IEC 42001 and NIST AI RMF]
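To illustrate what such a mapping looks like, here is a simplified, illustrative sketch (not the article's original table). It pairs a few EU AI Act obligations with the ISO/IEC 42001 management-system clauses and NIST AI RMF core functions (Govern, Map, Measure, Manage) that most directly support them; the clause pairings are an assumption for illustration, not an authoritative crosswalk:

```python
# Simplified, illustrative mapping only -- not an authoritative crosswalk.
# Each selected EU AI Act obligation is paired with the ISO/IEC 42001
# clauses and NIST AI RMF functions that most directly support it.
REQUIREMENT_MAP = {
    "Risk management system (Art. 9)": {
        "iso42001": "Clause 6 (Planning) & Clause 8 (Operation)",
        "nist_ai_rmf": ["Map", "Measure", "Manage"],
    },
    "Data and data governance (Art. 10)": {
        "iso42001": "Clause 7 (Support) & Clause 8 (Operation)",
        "nist_ai_rmf": ["Map", "Measure"],
    },
    "Human oversight (Art. 14)": {
        "iso42001": "Clause 5 (Leadership) & Clause 8 (Operation)",
        "nist_ai_rmf": ["Govern", "Manage"],
    },
    "Quality management system (Art. 17)": {
        "iso42001": "Clauses 4-10 (the AIMS as a whole)",
        "nist_ai_rmf": ["Govern"],
    },
}

for requirement, supports in REQUIREMENT_MAP.items():
    print(f"{requirement}: ISO/IEC 42001 -> {supports['iso42001']}; "
          f"NIST AI RMF -> {', '.join(supports['nist_ai_rmf'])}")
```

The practical takeaway is that an AIMS built on ISO/IEC 42001, informed by the NIST AI RMF functions, gives an organization reusable evidence for multiple EU AI Act obligations rather than one-off compliance artifacts.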



About the Author

Ashwin Chaudhary is the CEO of Controllo.ai, the AI-supercharged GRC platform for automating compliance audits, including ISO 42001 among other cybersecurity, cloud security, privacy, and compliance mandates. He is a CPA from Colorado and an MBA, CITP, CISA, CISM, CGEIT, CRISC, CISSP, CDPSE, CCSK, PMP, ISO 27001 LA, and ITILv3 certified cybersecurity professional with 22+ years of cybersecurity/privacy and 40+ years of industry experience. He has managed many cybersecurity projects covering SOC 2 reporting, ISO audits, privacy, Governance, Risk, and Compliance, pen testing, and a 24x7 CSOC.
