How ISO 42001 “AIMS” to Promote Trustworthy AI

Blog Article Published: 11/28/2023

Originally published by Schellman.

The regulation and responsible use of artificial intelligence (AI) have been hot topics of 2023, prompting the release of NIST’s AI Risk Management Framework to help organizations secure this emerging tech. More standards are on the way that will address the need for safeguards covering the security, safety, privacy, fairness, transparency, and data quality of AI systems throughout their life cycle—including ISO/IEC 42001.

ISO is already well-known among those interested and invested in cybersecurity, as it offers frameworks for the implementation of different management systems that can help you improve various aspects of your organization. Now—through the upcoming release of ISO 42001—ISO is getting into the AI game with what will eventually be best practices for an AI management system (AIMS).

We’re still not sure when this standard will be published—though we should have a clearer idea of whether it’ll be 2023 or 2024 by the end of the voting period in December—but we do already know some details about ISO 42001 and what it will entail.

In this blog post, we’ll break these known details down into ISO 42001’s structure, objectives, and intent so that you have a better idea of what’s coming and whether this standard will suit your organization.


What is ISO 42001?

As a new AI management system standard (MSS), ISO 42001 is expected to ask organizations to take a risk-based approach in applying its requirements to AI use (applying the AIMS indiscriminately to every use case within an organization can harm other business objectives and raise additional concerns without realizing any tangible benefits).

The other, and perhaps most exciting, initial takeaway may be that, while ISO 42001 will be a certifiable, management system framework—the aforementioned AIMS—the standard has been drafted in such a way as to facilitate integration with other, existing MSS, such as:

  • ISO 27001 (information security)
  • ISO 27701 (privacy)
  • ISO 9001 (quality)

Since the issues and risks surrounding AI in those areas of security, privacy, and quality, among others, should not be managed separately for AI—but rather holistically—the adoption of an AIMS can enhance both the effectiveness of your existing management systems in those areas and your overall compliance posture.

That being said, it’s important to note that ISO 42001 does not require other MSS to be implemented or certified as a prerequisite, nor is it the intent of ISO 42001 to replace or supersede existing quality, safety, security, privacy, or other MSS.

Still, the potential for such integration will help organizations that need to meet the requirements of two or more such standards, though the focus of each implemented MSS must remain distinct—e.g., information security with ISO 27001. Should you opt to adhere to ISO 42001, you’ll be expected to focus your application of the requirements on features that are unique to AI and the issues and risks that arise with its use.


ISO 42001 Structure

The structure of the eventual ISO 42001 will appear very familiar to those who’ve already been ISO 27001 certified, as ISO 42001 also features:

  • Clauses 4-10; and
  • An Annex A listing of controls* that can help organizations both:
    • Meet objectives as they relate to the use of AI; and
    • Address the concerns identified during the risk assessment process related to the design and operation of AI systems.

* These particular controls are not required to be used—rather, they’re meant to be a reference, and you are free to design and implement controls as needed.

Within the current draft of ISO 42001, the 39 Annex A controls* touch on the following areas:

  • Policies related to AI
  • Internal organization (e.g., roles and responsibilities, reporting of concerns)
  • Resources for AI systems (e.g., data, tooling, human)
  • Impact analysis of AI systems on individuals, groups, & society
  • AI system life cycle
  • Data for AI systems
  • Information for interested parties of AI systems
  • Use of AI systems (e.g., responsible / intended use, objectives)
  • Third-party relationships (e.g., suppliers, customers)

* The final number of controls and subject matter are both subject to change once the final standard is published.
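To make that risk-based approach a little more concrete, below is a minimal, hypothetical sketch of how an organization might record AI risks against the draft Annex A control areas listed above. The class, field names, and example entry are all illustrative assumptions; none of them come from the standard itself, and the area labels may shift once the final control set is published.

```python
# Hypothetical sketch only: a minimal risk register mapping identified AI
# risks to the draft Annex A control areas described above. All names and
# the example entry are illustrative; none come from the standard itself.
from dataclasses import dataclass, field

# Draft Annex A control areas (subject to change once the standard is published).
ANNEX_A_AREAS = [
    "Policies related to AI",
    "Internal organization",
    "Resources for AI systems",
    "Impact analysis of AI systems on individuals, groups, and society",
    "AI system life cycle",
    "Data for AI systems",
    "Information for interested parties of AI systems",
    "Use of AI systems",
    "Third-party relationships",
]

@dataclass
class AIRisk:
    description: str
    affected_objective: str                      # e.g., "Fairness" (see Annex C below)
    control_areas: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # Reject mappings that reference a control area not in the draft list.
        unknown = [a for a in self.control_areas if a not in ANNEX_A_AREAS]
        if unknown:
            raise ValueError(f"Unknown control area(s): {unknown}")

# Example entry: a training-data risk mapped to two relevant control areas.
risk = AIRisk(
    description="Incomplete training data skews loan-approval outcomes",
    affected_objective="Fairness",
    control_areas=[
        "Data for AI systems",
        "Impact analysis of AI systems on individuals, groups, and society",
    ],
)
risk.validate()  # raises if the mapping drifts from the recognized areas
```

Because the Annex A controls are a reference rather than a mandate, a register like this is simply one way to demonstrate which controls address which identified risks.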

ISO 42001 also contains an Annex B and Annex C:

  • Annex B – Provides the implementation guidance for the controls listed in Annex A. (Think of this as similar to the separate ISO 27002 standard for ISO 27001’s Annex A.)
  • Annex C – Outlines the potential organizational objectives, risk sources, and descriptions that can be considered when managing risks related to the use of AI.

ISO 42001 Objectives and Risk Sources

Those potential objectives and risk sources referenced in Annex C address the following areas:

Objectives:

  • Fairness
  • Security
  • Safety
  • Privacy
  • Robustness
  • Transparency and explainability
  • Accountability
  • Availability
  • Maintainability
  • Availability and quality of training data
  • AI expertise

Risk Sources:

  • Level of automation
  • Lack of transparency and explainability
  • Complexity of IT environment
  • System life cycle issues
  • System hardware issues
  • Technology readiness
  • Risks related to ML
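To illustrate how these two Annex C lists might interact in practice, here is a hypothetical scoring sketch that rates a few risk sources against the objectives they could undermine. The pairings, likelihoods, and impacts are invented for illustration; ISO 42001 does not prescribe any scoring scheme.

```python
# Hypothetical sketch only: rating a few Annex C risk sources against the
# objectives they could undermine. The pairings and scores are illustrative;
# the standard does not prescribe a scoring scheme.
RISK_MATRIX = {
    # risk source: (affected objectives, likelihood 1-5, impact 1-5)
    "Lack of transparency and explainability": (
        ["Transparency and explainability", "Accountability"], 3, 5),
    "Complexity of IT environment": (["Security", "Maintainability"], 4, 3),
    "Risks related to ML": (["Security", "Safety", "Robustness"], 3, 4),
}

def prioritized_risks(matrix: dict) -> list[tuple[str, int]]:
    """Order risk sources by a simple likelihood * impact rating."""
    ratings = [(source, likelihood * impact)
               for source, (_, likelihood, impact) in matrix.items()]
    return sorted(ratings, key=lambda pair: pair[1], reverse=True)

for source, rating in prioritized_risks(RISK_MATRIX):
    print(f"{rating:>2}  {source}")
```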

And finally, ISO 42001 contains an Annex D that speaks to the use of an AIMS across domains or sectors.


The Intent of ISO 42001

Meeting those objectives and mitigating those risk sources as outlined in the ISO 42001 framework will be helpful as AI use continues to expand—this tech is increasingly being applied across all sectors that utilize IT, and trends indicate it’s expected to be one of the main economic drivers over the coming years.

As such, the intent of ISO 42001 is to help organizations responsibly perform their roles in the use, development, monitoring, or provision of products or services that utilize AI so as to secure the technology.

Special focus through the ISO 42001 framework can help organizations implement the different safeguards that may be required by certain features of AI—features that raise additional risks within a particular process or system compared to how the same task would traditionally be performed without AI.

Examples of these “certain features” that would warrant specific safeguards are:

  • Automatic Decision-Making – When done in a non-transparent and non-explainable way, this may require specific administration and oversight beyond that of traditional IT systems.
  • Data Analysis, Insight, and Machine Learning (ML) – When employed in place of human-coded logic to design systems, these change the way that such systems are developed, justified, and deployed in ways that may require different protections.
  • Continuous Learning – AI systems that perform continuous learning change their behavior during use and require special considerations to ensure their use remains responsible as their behavior changes (one such consideration is sketched below).
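As a concrete example of the special considerations mentioned in that last item, here is a hypothetical sketch of a baseline drift check for a continuously learning system. The threshold, window sizes, and sample values are assumptions made for illustration, not anything drawn from ISO 42001.

```python
# Hypothetical sketch only: one "special consideration" for a continuously
# learning system is detecting when its behavior drifts from a reviewed
# baseline. This compares the mean of recent predictions against a frozen
# reference window; the threshold and sample data are illustrative.
from statistics import mean, stdev

def behavior_drifted(reference: list[float], recent: list[float],
                     z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean departs from the reference mean by
    more than z_threshold reference standard deviations."""
    ref_mean, ref_std = mean(reference), stdev(reference)
    if ref_std == 0:
        return mean(recent) != ref_mean
    z = abs(mean(recent) - ref_mean) / ref_std
    return z > z_threshold

# Example: approval scores creep upward after the model keeps learning in production.
baseline = [0.52, 0.48, 0.50, 0.51, 0.49, 0.53, 0.47, 0.50]
live = [0.71, 0.69, 0.73, 0.70]
if behavior_drifted(baseline, live):
    print("Drift detected: route the model for human review before continued use.")
```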


Available AI Cybersecurity Guidance / Regulation That Can Help

Organizations need to get started on securing their AI use as soon as possible, and while the first iteration of ISO 42001 could help—when it’s published—other important developments can be considered right now:

  • NIST AI Risk Management Framework (AI RMF 1.0): In January of this year, NIST released this framework to better manage AI-related risks to individuals, organizations, and society. Intended for voluntary use, the NIST AI RMF can improve the incorporation of trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
  • Biden Executive Order (October 2023): This extensive order issued by President Biden builds on previous initiatives and provides comprehensive strategies to help harness the potential of AI, while at the same time managing its associated risks.
  • EU AI Act: At the time of this blog’s publication, the EU is also in the process of finalizing its own AI use regulation that is centered around excellence and trust and aims to boost research and industrial capacity while ensuring safety and fundamental rights.


What’s Next for ISO 42001

Even with all these major new milestones regarding AI, America appears firmly committed to moving further toward regulation, if recent comments from Vice President Kamala Harris are any indication:

“History has shown in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies…One important way to address these challenges – in addition to the work we have already done – is through legislation. Legislation that strengthens AI safety without stifling innovation.”

While not legislation per se, ISO 42001 will likely still represent the next major development in AI security, though it remains—at this time—in Final Draft International Standard (FDIS) status. The related 8-week voting period ends December 1, 2023, at which point the standard will either be approved for publication or not, so by the end of 2023 we should have a better idea of when ISO 42001 will be published.
