
AI Controls Matrix
Who it's for:
  • AI model providers
  • Orchestrated service providers
  • Infrastructure operators
  • Application developers
  • AI customers


Release Date: 07/09/2025

Updated On: 09/09/2025

The AI Controls Matrix (AICM) is a first-of-its-kind, vendor-agnostic framework for cloud-based AI systems. Organizations can use the AICM to develop, implement, and operate AI technologies in a secure and responsible manner. Developed by industry experts, the AICM builds on CSA’s Cloud Controls Matrix (CCM) and incorporates the latest AI security best practices.

The AICM contains 243 control objectives distributed across 18 security domains. It maps to leading standards, including ISO 42001, ISO 27001, NIST AI RMF 1.0, and BSI AIC4. The AICM is freely available to download (see 'Download this Resource' below).

What’s Included in this Download:
  • AI Controls Matrix: A spreadsheet of 243 control objectives analyzed across five critical pillars: Control Type, Control Applicability and Ownership, Architectural Relevance, LLM Lifecycle Relevance, and Threat Category.
  • Consensus Assessment Initiative Questionnaire for AI (AI-CAIQ): A set of questions that map to the AICM. These questions can guide organizations in performing a self-assessment or an evaluation of third-party vendors.
  • Mapping to the BSI AIC4 Catalog
  • Mapping to NIST AI 600-1 (2024)
  • Mapping to ISO 42001:2023
Related Resources:
  • Cloud Controls Matrix (CCM): A cybersecurity control framework for cloud computing. Both providers and customers can use the CCM as a tool for the systematic assessment of a cloud implementation.
  • AI Trustworthy Pledge: A pledge that organizations can sign to signal commitment to developing and supporting trustworthy AI.
  • STAR for AI Program: A CSA initiative developing an upcoming certification that organizations can use to demonstrate AI trustworthiness.
  • Trusted AI Safety Knowledge Certification Program: An upcoming training and certificate program by CSA and Northeastern University. It aims to help professionals manage AI risks, apply safety controls, and lead responsible AI adoption.
Download this Resource

Related resources
  • Agentic AI Identity and Access Management: A New Approach
  • Secure Agentic System Design: A Trait-Based Approach
  • Code of Practice for Assessment Firms Offering STAR
  • Fortifying the Agentic Web: A Unified Zero Trust Architecture Against Logic-Layer Threats (Published: 09/12/2025)
  • The Hidden Security Threats Lurking in Your Machine Learning Pipeline (Published: 09/11/2025)
  • From Policy to Prediction: The Role of Explainable AI in Zero Trust Cloud Security (Published: 09/10/2025)
  • API Security in the AI Era (Published: 09/09/2025)

Acknowledgements

Jan Gerst
Lead Cybersecurity Engineer SME, Charter
MSMIT Cloud, MBA, MSMIT Cybersecurity
CSA CSP CCSK
Cornell University - Technology Leadership | Business Management
https://www.linkedin.com/in/jan-gerst-cybersecurity-professional

Ankit Sharma
Security Officer, Compute BU, Cisco Systems India Pvt Ltd

Marina Bregkou
Principal Research Analyst, Associate VP, CSA

Ken Huang
CEO & Chief AI Officer, DistributedApps.ai

Ken Huang is an acclaimed author of 8 books on AI and Web3. He is the Co-Chair of the AI Organizational Responsibility Working Group and AI Control Framework at the Cloud Security Alliance. Additionally, Huang serves as Chief AI Officer of DistributedApps.ai, which provides training and consulting services for Generative AI Security.

In addition, Huang contributed extensively to key initiatives in the space. He is a core contribut...
