
Who it's for:
- AI model providers
- Orchestrated service providers
- Infrastructure operators
- Application developers
- AI customers
AI Controls Matrix
Release Date: 07/09/2025
Updated On: 08/19/2025
The AI Controls Matrix (AICM) is a first-of-its-kind vendor-agnostic framework for cloud-based AI systems. Organizations can use the AICM to develop, implement, and operate AI technologies in a secure and responsible manner. Developed by industry experts, the AICM builds on CSA’s Cloud Controls Matrix (CCM) and incorporates the latest AI security best practices.
The AICM contains 243 control objectives distributed across 18 security domains. It maps to leading standards, including ISO 42001, ISO 27001, NIST AI RMF 1.0, and BSI AIC4. The AICM is freely available to download (see 'Download this Resource' below).
What’s Included in this Download:
- AI Controls Matrix: A spreadsheet of 243 control objectives analyzed across five critical pillars: Control Type, Control Applicability and Ownership, Architectural Relevance, LLM Lifecycle Relevance, and Threat Category.
- Consensus Assessment Initiative Questionnaire for AI (AI-CAIQ): A set of questions that map to the AICM. These questions can guide organizations in performing a self-assessment or an evaluation of third-party vendors.
- Mapping to the BSI AIC4 Catalog
- Mapping to NIST AI 600-1 (2024)
- Mapping to ISO 42001:2023
Related Resources:
- Cloud Controls Matrix (CCM): A cybersecurity control framework for cloud computing. Both providers and customers can use the CCM as a tool for the systematic assessment of a cloud implementation.
- AI Trustworthy Pledge: A pledge that organizations can sign to signal commitment to developing and supporting trustworthy AI.
- STAR for AI Program: A CSA initiative to deliver an upcoming certification for organizations to demonstrate AI trustworthiness.
- Trusted AI Safety Knowledge Certification Program: An upcoming training and certificate program by CSA and Northeastern University. It aims to help professionals manage AI risks, apply safety controls, and lead responsible AI adoption.
Download this Resource
Acknowledgements

Jan Gerst
Lead Cybersecurity Engineer SME, Charter

Ankit Sharma
Security Officer, Compute BU, Cisco Systems India Pvt Ltd
Marina Bregkou
Principal Research Analyst, Associate VP

Ken Huang
CEO & Chief AI Officer, DistributedApps.ai
Ken Huang is an acclaimed author of eight books on AI and Web3. He is Co-Chair of the AI Organizational Responsibility Working Group and the AI Controls Framework at the Cloud Security Alliance. Additionally, Huang serves as Chief AI Officer of DistributedApps.ai, which provides training and consulting services for Generative AI security. He has also contributed extensively to key initiatives in the space.
Related Certificates & Training

Learn the core concepts, best practices, and recommendations for securing an organization in the cloud, regardless of the provider or platform. Covering all 14 domains from the CSA Security Guidance v4, recommendations from ENISA, and the Cloud Controls Matrix, this training shows you how to leverage CSA's vendor-neutral research to keep data secure in the cloud.