How to Assess and Treat AI Risks and Impacts with ISO/IEC 42001:2023
Published 10/30/2024
Originally published by Schellman.
ISO/IEC 42001:2023 is rapidly becoming the global standard for Artificial Intelligence (AI) governance. While it is a close cousin of ISO/IEC 27001:2022, ISO 42001—rather than focusing primarily on cyber and information security—takes a more holistic approach to risk management for AI systems.
StackAware chose to implement ISO 42001 and subsequently performed the AI risk assessment, impact assessment, and risk treatment required to comply with the framework. In this blog post, we’ll detail how StackAware built their AIMS—including how they satisfied the critical AI risk requirements outlined in clauses 6.1.2-6.1.4—before offering our perspective on their efforts so that you can leverage this insight when establishing your own AIMS.
What are the ISO/IEC 42001:2023 Risk Requirements?
As its purpose is to help organizations ensure that their AI systems are developed, deployed, and managed responsibly and securely, ISO 42001’s clauses 6.1.2-6.1.4 require organizations to carry out three key activities:
- AI risk assessment
- AI impact assessment
- AI risk treatment
1. AI Risk Assessment
A successfully certified AIMS must document and carry out a process that assesses AI-related risks and their potential consequences to the organization, individuals, and society at large. This process must include an assessment of each risk’s likelihood and impact, as well as a comparison against the organization’s risk criteria and AI objectives.
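To make the mechanics concrete, here is a minimal sketch of how a likelihood-and-impact scoring step might look in code. The scales, acceptance threshold, and risk names are illustrative assumptions; ISO 42001 does not mandate any particular scoring scheme, and in practice the criteria would come from your documented AIMS methodology.

```python
from dataclasses import dataclass

# Illustrative 1-5 ordinal scales and acceptance threshold; these values are
# assumptions for the sketch, not requirements of ISO 42001.
ACCEPTANCE_THRESHOLD = 9  # risks scoring above this need treatment

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, compared against risk criteria below.
        return self.likelihood * self.impact

    @property
    def needs_treatment(self) -> bool:
        return self.score > ACCEPTANCE_THRESHOLD

# Hypothetical register entries echoing the risk categories discussed in this post.
register = [
    AIRisk("Prompt injection", likelihood=4, impact=3),
    AIRisk("Unintended training on sensitive data", likelihood=2, impact=5),
    AIRisk("Third-party copyright infringement", likelihood=2, impact=4),
]

for risk in register:
    print(f"{risk.name}: score={risk.score}, treat={risk.needs_treatment}")
```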
When StackAware performed its ISO 42001 risk assessment, the results revealed the organization’s vulnerability to AI-related cybersecurity risks from:
- Prompt injection
- Unintended training
- Unanticipated data retention
It also highlighted the potential for broader implications of StackAware’s AI use, including:
- Political bias
- Model collapse
- Third-party copyright infringement
2. AI Impact Assessment
ISO 42001 also requires a separate but related AI impact assessment that is focused on external entities—like groups of individuals and broader societies—rather than your organization and your AI-related objectives and use cases.
While the standard is less prescriptive in terms of how this impact assessment must be conducted, it can—and should—be used to inform your AI risk assessment.
Some issues StackAware identified as part of their AI impact assessment included:
- Legal, governmental, and public policy: StackAware is an OpenAI customer and uses the company’s Whisper application programming interface (API) for speech-to-text transcription (a minimal sketch of this kind of API call appears after this list). Some of the potential public policy impacts of that technology include:
  - The increased public access to written information, especially related to public proceedings such as trials.
  - The risk of malicious actors spreading misinformation if they successfully evade OpenAI’s safety layers. Specifically, scammers or even state-sponsored groups could potentially cause confusion during key moments by mimicking the voices of government officials.
- Environmental sustainability: StackAware is also an avid user of OpenAI’s generative pre-trained transformer (GPT)-4 model. While information regarding GPT-4 is scarce, independent researchers estimate that training its predecessor, GPT-3, evaporated roughly 700,000 liters of clean freshwater, pointing to a significant sustainability impact of continuous AI use.
- Economic disruption: Although AI has the potential to massively increase economic output over the coming decades, it could also eliminate entire categories of occupations. This is something StackAware weighed against their extremely limited manpower and need to iterate rapidly.
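As an aside, for readers unfamiliar with the speech-to-text workflow mentioned in the first item above, here is a minimal sketch of what such a Whisper API transcription call might look like using OpenAI’s Python SDK. The file name is hypothetical, and this is an illustrative sketch rather than StackAware’s actual integration.

```python
# Minimal sketch of a Whisper API transcription call using the openai Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; "hearing_recording.mp3" is a
# hypothetical file name used purely for illustration.
from openai import OpenAI

client = OpenAI()

with open("hearing_recording.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)
```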
3. AI Risk Treatment
Once the organizational risks and broader impacts are clear, ISO 42001’s requisite next step is to treat them appropriately.
When doing so, StackAware used the four traditional risk treatment approaches: accept, avoid, transfer, and mitigate. Some examples follow (a sketch of how such treatment decisions might be recorded appears after this list):
- Accepted Risk: While OpenAI’s breach in early 2023 showed that cross-tenant data leakage is certainly a risk, StackAware decided that they didn’t have enough leverage to negotiate a single-tenant architecture with OpenAI. Because of the benefits of using the company’s products, StackAware accepted the possibility of it happening again.
- Avoided Risk: Because the company doesn’t train the underlying GPT-4 model on AI-generated material (or at all), StackAware avoided the risk of triggering a model collapse. With that said, there is some risk that OpenAI could do so of its own accord, with the same result, but StackAware accepted this marginal risk.
- Transferred Risk: Due to the uncertainty regarding the applicability of copyright law as it relates to generative AI, StackAware leveraged OpenAI’s indemnification provisions to transfer some litigation risk to them.
- Mitigated Risk: StackAware’s AI policy requires that employees and contractors opt out of training third-party AI models to the maximum extent possible. StackAware also leverages ChatGPT Team, which disables all training on user inputs. These measures reduce the risk of sensitive information being regurgitated to other OpenAI customers through unintended training.
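To make these four options concrete, here is a minimal sketch of how such treatment decisions might be recorded in code. The Treatment enum, risk names, and rationales are illustrative assumptions for the example, not StackAware’s actual risk treatment documentation.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    AVOID = "avoid"
    TRANSFER = "transfer"
    MITIGATE = "mitigate"

@dataclass
class TreatmentDecision:
    risk: str
    treatment: Treatment
    rationale: str

# Hypothetical decisions mirroring the examples above.
decisions = [
    TreatmentDecision(
        risk="Cross-tenant data leakage at a model provider",
        treatment=Treatment.ACCEPT,
        rationale="No leverage to negotiate single-tenant hosting; benefits outweigh the risk.",
    ),
    TreatmentDecision(
        risk="Model collapse from training on AI-generated material",
        treatment=Treatment.AVOID,
        rationale="The organization does not train the underlying model at all.",
    ),
    TreatmentDecision(
        risk="Copyright litigation over generative AI outputs",
        treatment=Treatment.TRANSFER,
        rationale="Vendor indemnification provisions shift some litigation risk.",
    ),
    TreatmentDecision(
        risk="Sensitive data exposed through unintended training",
        treatment=Treatment.MITIGATE,
        rationale="Opt out of training and use plans that disable training on inputs.",
    ),
]

for decision in decisions:
    print(f"[{decision.treatment.value.upper()}] {decision.risk}: {decision.rationale}")
```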
To help avoid, transfer, and mitigate these risks, StackAware applied all ISO 42001 Annex A controls. They also further bolstered their cybersecurity posture as it relates to AI by maintaining a vulnerability disclosure policy (VDP) that solicits confidential reports from ethical hackers about potential flaws in StackAware networks or AI systems.
ISO 42001 Risk Management from the Assessor’s Perspective
AI is not inherently ‘good’ or ‘evil’, ‘fair’ or ‘biased’, ‘ethical’ or ‘unethical’, although it can be (or can seem) any of these. As with most things, there are advantages and drawbacks to this advanced technology.
While AI can facilitate positive progress—like the automation of difficult or dangerous jobs, faster and more accurate analysis of large data sets, advances in healthcare, and more—there are also concerns about AI’s potentially negative effects, including harm due to unwanted bias, environmental damage, and unwanted reductions in workforce.
To reassure consumers that they can trust these systems in light of all these worries, it’s imperative that organizations providing, developing, and/or using AI in the delivery of their services have robust AI risk management processes in place to foster transparency and trustworthiness.
StackAware’s experience in getting ISO 42001 certified shows that the framework is a strong choice for this, as it lays out the requirements for performing risk assessments, risk treatment (mitigation), and system impact assessments on AI systems.
Here are some other, complementary standards you can reference for additional guidance when performing such AI risk management efforts:
- ISO/IEC 23894 (provides guidance on managing risks specific to AI)
- ISO/IEC 38507 (provides guidance on the governance implications of the use of AI)
- ISO/IEC DIS 42005 (still in draft form at the time of writing, but provides guidance on performing AI system impact assessments)
Moving Forward with Your ISO 42001 Certification
As AI use proliferates across the globe, ISO/IEC 42001:2023 continues its emergence as a key governance framework that can help organizations effectively and appropriately measure and treat the related impacts and risks while reaping the benefits AI offers.
Understanding clauses 6.1.2-6.1.4 of the standard will be critical to your certification, and hopefully this insight into how StackAware satisfied those requirements and achieved certification will aid in the build-out of your AIMS.