
ISO 42001: A New AI Management System for the Trustworthy Use of AI

Blog Article Published: 01/30/2024

Originally published by BARR Advisory on December 6, 2023.

Written by Kyle Cohlmia.

In a survey by Heidrick & Struggles, respondents most often identified Artificial Intelligence (AI) as a significant threat to organizations in the next five years. With this finding in mind and the release of initiatives like the NIST AI Risk Management Framework and the HITRUST AI Assurance Program, it’s no surprise that security and compliance professionals are looking for solutions that can enhance the trustworthiness of AI.

In 2024, ISO will also join the AI sphere by releasing ISO 42001, a new standard designed to help organizations implement safeguards for the security, safety, privacy, fairness, transparency, and data quality of AI systems. ISO 42001 will include best practices for an AI management system (AIMS) and is intended to help organizations responsibly perform their roles with respect to AI, whether using, developing, monitoring, or providing products or services that rely on it.

So, what will the new standard mean for organizations that want to adhere to an ISO AIMS? Let’s break down ISO 42001’s risk management approach, unique safeguards, and framework structure.


ISO 42001 AIMS and Other ISO Management System Standards

As a new ISO management system standard (MSS), ISO 42001 will take a risk-based approach to applying its requirements to the use of AI. One of ISO 42001’s most notable features is that it has been drafted to integrate with other existing MSS, such as:

  • ISO 27001 for information security;
  • ISO 27701 for privacy; and,
  • ISO 9001 for quality

It’s important to note that ISO 42001 does not require organizations to implement or certify to other MSS as a prerequisite, nor is it intended to replace them. Instead, integrating ISO 42001 will help organizations that must meet the requirements of two or more of these standards. If your organization opts to adhere to ISO 42001, you’ll be expected to focus your application of the requirements on the features unique to AI and the issues and risks that arise with its use.

Because organizations should treat the management of AI-related issues and risks as part of a comprehensive strategy, adopting an AIMS can enhance the effectiveness of your existing management systems for information security, privacy, and quality, as well as your overall compliance posture.


ISO 42001 AI Safeguards

As AI continues to evolve, the ISO 42001 framework can help organizations implement safeguards for certain AI features that could create additional risks within a particular process or system.

Examples of features that may require specific safeguards are:

  • Automatic Decision-Making: When done in a non-transparent way, automatic decision-making may require specific administration and oversight beyond that of traditional IT systems.
  • Data Analysis, Insight, and Machine Learning (ML): When employed in place of human-coded logic to design systems, these features change how such systems are developed, justified, and deployed in ways that may require different protections.
  • Continuous Learning: AI systems that perform continuous learning change their behavior during use and require special considerations to ensure they continue to be used responsibly as their behavior changes.


The ISO 42001 Structure

The structure of the upcoming ISO 42001 won’t look much different from the popular ISO 27001 framework. In fact, ISO 42001 will include similar features, such as Clauses 4-10 and an Annex A listing controls that can help organizations meet objectives related to the use of AI and address concerns identified during risk assessments of the design and operation of AI systems.


ISO 42001 Subject Matter

Within the current draft of ISO 42001, the 39 Annex A controls touch on the following areas:

  • Policies related to AI
  • Internal organization
  • Resources for AI systems
  • Impact analysis of AI systems on individuals, groups, and society
  • AI system life cycle
  • Data for AI systems
  • Information for interested parties of AI systems
  • Use of AI systems
  • Third-party relationships


New ISO 42001 Annexes

ISO 42001 will also contain Annexes B, C, and D. See the following descriptions for more information on these new annexes.

  • Annex B: Annex B will provide implementation guidance for the controls listed in Annex A, much as the separate ISO 27002 standard does for ISO 27001’s Annex A.
  • Annex C: Annex C will outline the potential organizational objectives, risk sources, and descriptions that can be considered when managing risks related to the use of AI.
  • Annex D: Annex D will address using an AIMS across domains or sectors.


ISO 42001 Annex C Objectives and Risk Sources

The potential objectives and risk sources addressed in Annex C will include the following areas:

Objectives:

  • Fairness
  • Security
  • Safety
  • Privacy
  • Robustness
  • Transparency and explainability
  • Accountability
  • Availability
  • Maintainability
  • Availability and quality of training data
  • AI expertise

Risk Sources:

  • Level of automation
  • Lack of transparency and explainability
  • Complexity of IT environment
  • System life cycle issues
  • System hardware issues
  • Technology readiness
  • Risks related to ML

ISO 42001 will undoubtedly play a key role in the development of AI security. While the exact release date has yet to be announced, we should know more by the end of 2023 about when ISO 42001 will be published.
