Who it's for:
- AI model providers
- Orchestrated service providers
- Infrastructure operators
- Application developers
- AI customers
AI Controls Matrix
Release Date: 07/09/2025
Updated On: 10/30/2025
The AI Controls Matrix (AICM) is a first-of-its-kind vendor-agnostic framework for cloud-based AI systems. Organizations can use the AICM to develop, implement, and operate AI technologies in a secure and responsible manner. Developed by industry experts, the AICM builds on CSA’s Cloud Controls Matrix (CCM) and incorporates the latest AI security best practices.
The AICM contains 243 control objectives distributed across 18 security domains. It maps to leading standards, including ISO 42001, ISO 27001, NIST AI RMF 1.0, and BSI AIC4. The AICM is freely available to download (see 'Download this Resource' below).
What’s Included in this Download:
- AI Controls Matrix: A spreadsheet of 243 control objectives analyzed across five critical pillars: Control Type, Control Applicability and Ownership, Architectural Relevance, LLM Lifecycle Relevance, and Threat Category.
- Mapping to the BSI AIC4 Catalog
- Mapping to NIST AI 600-1 (2024)
- Mapping to ISO 42001:2023
- Implementation Guidelines
- Auditing Guidelines
- Mapping to the EU AI Act
- Consensus Assessment Initiative Questionnaire for AI (AI-CAIQ): A set of questions that map to the AICM. These questions can guide organizations in performing a self-assessment or an evaluation of third-party vendors.
- Introductory Guidance to AICM: An introduction on how to use the AICM and the various additional resources available.
- Filling in the AI-CAIQ: Guidance on accurately completing the AI-CAIQ self-assessment, including ownership, evidence, and documentation rules.
- STAR for AI Level 1 Submission Guide: Step-by-step instructions for submitting an AI-CAIQ self-assessment to the STAR Registry.
Related Resources:
- Cloud Controls Matrix (CCM): A cybersecurity control framework for cloud computing. Both providers and customers can use the CCM as a tool for the systematic assessment of a cloud implementation.
- AI Trustworthy Pledge: A pledge that organizations can sign to signal commitment to developing and supporting trustworthy AI.
- STAR for AI Program: A CSA initiative to deliver an upcoming certification for organizations to demonstrate AI trustworthiness.
- Trusted AI Safety Knowledge Certification Program: An upcoming training and certificate program by CSA and Northeastern University. It aims to help professionals manage AI risks, apply safety controls, and lead responsible AI adoption.
Download this Resource
Related Certificates & Training
Learn the core concepts, best practices, and recommendations for securing an organization in the cloud, regardless of provider or platform. Covering all 14 domains of the CSA Security Guidance v4, recommendations from ENISA, and the Cloud Controls Matrix, this training will show you how to leverage CSA's vendor-neutral research to keep data secure in the cloud.
Learn more