The AI Controls Matrix (AICM) is a first-of-its-kind vendor-agnostic framework for cloud-based AI systems. Organizations can use the AICM to develop, implement, and operate AI technologies in a secure and responsible manner. Developed by industry experts, the AICM builds on CSA’s Cloud Controls Matrix (CCM) and incorporates the latest AI security best practices.
The AICM contains 243 control objectives distributed across 18 security domains. It maps to leading standards, including ISO 42001, ISO 27001, NIST AI RMF 1.0, and BSI AIC4. The AICM is freely available to download (see 'Download the Resource' below).
What’s Included in this Download:
- AI Controls Matrix: A spreadsheet of 243 control objectives analyzed across five critical pillars: Control Type, Control Applicability and Ownership, Architectural Relevance, LLM Lifecycle Relevance, and Threat Category.
- Mapping to ISO 42001:2023
- Mapping to NIST AI 600-1 (2024)
- Mapping to the BSI AIC4 Catalog
- Mapping to the EU AI Act
- Implementation Guidelines
- Auditing Guidelines
- Consensus Assessment Initiative Questionnaire for AI (AI-CAIQ): A set of questions that map to the AICM. These questions can guide organizations in performing a self-assessment or an evaluation of third-party vendors.
- Introductory Guidance to AICM: An introduction to the AICM, how to use it, and the additional resources available.
- Filling in the AI-CAIQ: Guidance on accurately completing the AI-CAIQ self-assessment, including ownership, evidence, and documentation rules.
- STAR for AI Level 1 Submission Guide: Step-by-step instructions for submitting an AI-CAIQ self-assessment to the STAR Registry.
You can also download unique versions of the AICM Implementation and Auditing Guidelines based on your organization’s role:
- Model Provider (MP): Develops, trains, and distributes foundation or fine-tuned AI models that create the underlying AI capabilities others build upon, operating at the foundation layer of the AI stack.
- Orchestrated Service Provider (OSP): Provides AI platforms and orchestration layers that integrate and govern models in enterprise environments, and is responsible for implementing controls to mitigate security, privacy, and compliance risks associated with LLM/genAI technologies.
- Application Provider (AP): Builds end-user AI applications that leverage models to deliver domain-specific functionality and user experiences, and is responsible and accountable for implementing controls within its own infrastructure and the services or products it develops and offers.
- AI Customer (AIC): Consumes AI services, platforms, or applications and is responsible for the design, development, implementation, and enforcement of controls to mitigate security, privacy, and compliance risks associated with LLM/genAI technologies within its organization.
- Cloud Service Provider (CSP): Delivers the underlying cloud infrastructure that hosts and supports AI systems and workloads, and is responsible for designing, developing, implementing, and enforcing controls to mitigate security, privacy, and compliance risks in the cloud services they provide.
Related Resources:
- Cloud Controls Matrix (CCM): A cybersecurity control framework for cloud computing. Both providers and customers can use the CCM as a tool for the systematic assessment of a cloud implementation.
- AI Trustworthy Pledge: A pledge that organizations can sign to signal commitment to developing and supporting trustworthy AI.
- STAR for AI Program: An upcoming CSA certification initiative through which organizations can demonstrate the trustworthiness of their AI systems.
- Trusted AI Safety Knowledge Certification Program: An upcoming training and certificate program by CSA and Northeastern University. It aims to help professionals manage AI risks, apply safety controls, and lead responsible AI adoption.
Download this Resource
Best For:
- AI model providers
- Orchestrated service providers
- Infrastructure operators
- Application developers
- AI customers
Premier AI Safety Ambassadors

Premier AI Safety Ambassadors play a leading role in promoting AI safety within their organizations, advocating for responsible AI practices and pragmatic solutions to manage AI risks. Learn more about how your organization can participate and take a seat at the forefront of AI safety best practices.