CSA STAR for AI
Building AI trust upon STAR’s proven foundations.


Now is the Time for AI Security Assurance
AI is already embedded in our lives and businesses, with adoption accelerating across sectors. While initial frameworks and regulations exist, they remain largely unrefined and under-adopted. To meet this moment, CSA is extending its globally recognized STAR Program to AI.
STAR for AI provides a security controls framework, AI safety pledge, and certification program tailored for AI systems. It delivers a transparent, expert-driven, and consensus-based mechanism for organizations to assess, demonstrate, and ensure AI trustworthiness.
What is STAR for AI?
Building on the world’s most complete cloud assurance program with over 3,400 assessments globally, CSA’s STAR Registry is expanding to include AI services. STAR for AI draws from the Cloud Controls Matrix to offer an authoritative foundation for measuring and communicating AI assurance.
Whether you provide AI models, platforms, orchestrators, or applications, or deliver cloud or SaaS services, STAR for AI equips you with what you need to showcase alignment with recognized AI controls, build credible and transparent AI security programs, and validate the AI services you use or provide.
Why are we building STAR for AI? Simple: Within our community, what we hear is a request to support the controlled adoption of GenAI services and the governance of its cybersecurity and safety. We need a mechanism to measure trustworthiness, and CSA is committed to delivering a solution.
Daniele Catteddu
CTO, Cloud Security Alliance (CSA)
The Key Components of STAR for AI
Explore ways to demonstrate your organization’s AI safety and security with CSA:
AI Trustworthy Pledge
The AI Trustworthy Pledge outlines a set of high-level AI safety principles that organizations can commit to. By signing the pledge, your organization signals its commitment to developing and supporting trustworthy AI. The pledge also serves as a precursor to STAR for AI Level 1 (the STAR Self-Assessment), launching later this year.
Join the growing list of organizations leading the way. Sign the pledge and get recognized.

AI Controls Matrix
Coming soon: The AI Controls Matrix (AICM) is a framework of control objectives designed to support the secure and responsible development, deployment, management, and use of AI technologies. It helps organizations evaluate risks within the AI service value chain and identify appropriate controls, particularly for Generative AI.

Trusted AI Safety Knowledge Certification Program
Coming soon: This research-backed certification program from CSA and Northeastern University, launching in late 2025, will help professionals manage AI risks, apply safety controls, and lead responsible AI adoption.

Join the Global Push for AI Accountability
Interested in taking the AI Trustworthy Pledge? Submit your email to receive access to the pledge. Once you've signed, you'll receive your official digital badge and have your organization's logo featured on our website.