
Understanding STAR for AI Level 2: A Practical Step Toward AI Security Compliance

Published 11/19/2025

Written by Eleftherios Skoutaris, AVP of GRC Solutions, CSA EMEA.

The landscape of AI governance is evolving rapidly, presenting significant challenges for organizations trying to establish robust compliance frameworks. To meet the immediate need for structured AI security guidance, the Cloud Security Alliance (CSA) has introduced an initial version of the STAR for AI Level 2 designation, which leverages ISO/IEC 42001. This initial release is designed for a period in which the industry is still learning how to manage AI risks, new assessment technologies are being developed, and consensus around AI governance is still forming.


What is STAR for AI Level 2?

STAR for AI Level 2 is a designation launched by CSA on November 20, 2025. It recognizes organizations that demonstrate a commitment to AI security through three key components:

  • ISO/IEC 42001 certification: A validated AI management system framework from an accredited certification body.
  • AI-CAIQ v1.0.2 completion: The Consensus Assessment Initiative Questionnaire for AI, which operationalizes the AI Controls Matrix (AICM) with security-specific controls.
  • Valid-AI-ted scoring: A maturity assessment tool (available for AI-CAIQs starting on November 20, 2025) that evaluates the quality of AI-CAIQ implementations.

Organizations submit these components to the STAR Registry to receive the STAR for AI Level 2 designation.


The Business Case for Adoption

Organizations face increasing pressure from customers, regulators, and stakeholders to demonstrate responsible AI practices. CSA’s STAR for AI Level 2 designation addresses several critical business needs:

AI-Specific Security Controls: Traditional cloud security frameworks provide foundational security principles but were not designed to address AI-specific risks such as model poisoning, adversarial attacks, training data contamination, or the unique shared responsibility models in AI-as-a-Service deployments. STAR for AI Level 2 directly addresses these gaps.

Stakeholder Assurance: The designation provides verifiable evidence of an organization's AI security posture. When customers, partners, or auditors request information about AI governance practices, organizations can point to a structured, industry-recognized framework rather than developing ad hoc responses.

Operational Clarity: The AI-CAIQ assessment process requires organizations to document their AI security controls comprehensively, including implementation details, evidence, and shared responsibility boundaries. This documentation provides operational clarity that benefits both internal teams and external stakeholders.
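As a rough illustration of what this documentation might capture (the structure and field names below are hypothetical, not the actual AI-CAIQ schema), a single control response could be recorded as a structured record along these lines:

```python
from dataclasses import dataclass, field

@dataclass
class ControlResponse:
    """Hypothetical AI-CAIQ answer record; fields are illustrative, not the official schema."""
    control_id: str                   # e.g., an AICM control identifier (placeholder)
    answer: str                       # "YES" / "NO" / "N/A"
    implementation: str               # how the control is implemented
    evidence: list[str] = field(default_factory=list)  # policies, reports, DPIAs, etc.
    provider_responsibility: str = ""   # what the organization owns
    customer_responsibility: str = ""   # what remains with the customer

example = ControlResponse(
    control_id="AICM-XX-01",  # placeholder, not a real control number
    answer="YES",
    implementation="Training data is validated and versioned before model training.",
    evidence=["Data governance policy v2", "Training data lineage report"],
    provider_responsibility="Curates and validates first-party training data.",
    customer_responsibility="Validates any customer-supplied fine-tuning data.",
)
```

Whatever format an organization actually uses, the point is that each answer pairs the implementation claim with its supporting evidence and its shared responsibility split.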


Value for Early Adopters

Organizations that pursue STAR for AI Level 2 now position themselves advantageously for several reasons:

Preparation for Future Requirements: By engaging with the AI-CAIQ framework and Valid-AI-ted assessment now, organizations build the documentation, processes, and institutional knowledge needed for future enhancements to STAR for AI Level 2.

Market Differentiation: In competitive procurement processes and enterprise sales cycles, demonstrable AI security practices provide tangible differentiation. A STAR for AI Level 2 designation signals proactive governance rather than reactive compliance.

Iterative Improvement Framework: The program acknowledges practical implementation realities. Organizations can answer "YES" to controls that are implemented with minor gaps, provided they document those gaps with remediation plans and target dates. This approach encourages continuous improvement rather than demanding immediate perfection.
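As a sketch of how such an answer might be tracked in practice (the field names are assumptions, not a CSA-prescribed format), a "YES" with a documented minor gap could be recorded alongside its remediation plan and target date:

```python
from datetime import date

# Illustrative only: tracking a control answered "YES" while a known minor gap
# is remediated. Field names and values are assumptions, not defined by CSA.
gap_tracked_answer = {
    "control_id": "AICM-XX-07",        # placeholder identifier
    "answer": "YES",
    "known_gap": "Adversarial robustness testing covers only the primary model.",
    "remediation_plan": "Extend robustness testing to all production models.",
    "target_date": date(2026, 3, 31),  # example remediation target date
    "owner": "AI security engineering",
}
```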


Preparation Steps

Organizations should take several concrete actions to prepare for STAR for AI Level 2 submission:

  1. Initiate ISO/IEC 42001 certification if not already in progress. This forms the foundation of the management system needed for the designation.
  2. Review the AI Controls Matrix and AI-CAIQ documentation. Familiarize relevant teams with the controls framework and assessment questions to identify potential gaps early.
  3. Compile supporting evidence. Gather policies, procedures, model documentation, risk assessments, Data Protection Impact Assessments (DPIAs), and other artifacts that demonstrate control implementation.
  4. Establish submission account infrastructure. Create accounts on the STAR submission system and prepare the required organizational information, including backup points of contact, regional designation, organization and service descriptions, and relevant URLs.
  5. Document shared responsibility boundaries. Clearly define where organizational responsibility ends and customer responsibility begins for each relevant control.


Strategic Implications

STAR for AI Level 2 reflects the industry's collective effort to establish practical, implementable standards for AI security. Organizations participating in this program contribute to the maturation of AI governance while simultaneously building their own capabilities.

As we enhance STAR for AI Level 2 to address the criticality of AI assurance, organizations that have already implemented STAR for AI and demonstrated compliance with ISO/IEC 42001 will already have the processes, documentation, and experience in place.


Conclusion

STAR for AI Level 2 provides organizations with a structured, industry-recognized path to demonstrate AI security maturity. The initial version of the designation acknowledges that AI governance is evolving while providing immediate, actionable guidance for organizations committed to responsible AI practices.

For GRC leaders and executives, the decision point is clear: organizations can begin building AI security capabilities now with structured guidance, or defer until more stringent requirements emerge. The former approach provides strategic advantages in risk management, stakeholder assurance, and operational readiness.

The submission window opened on November 20, 2025. Organizations serious about AI security should evaluate their readiness and begin the implementation process.
