How Organizations Can Lead the Way in Trustworthy AI
Published 10/16/2025
Artificial intelligence is reshaping the world at a pace that few technologies have ever matched. From healthcare to customer support, AI systems now influence decisions with profound consequences. Yet alongside its promise, AI carries risks such as bias, hallucinations, privacy breaches, and a lack of transparency. These risks have created what experts call a trust gap: AI's capabilities are advancing faster than the confidence that users, regulators, and the public place in it.
AI without trust is unsustainable. Organizations that cannot demonstrate responsible AI practices face mounting scrutiny from customers, regulators, and partners. Those that can will enjoy a decisive competitive advantage. This is the imperative behind CSA's AI Trustworthy Pledge.
What Is the AI Trustworthy Pledge?
The AI Trustworthy Pledge is a public commitment organizations can make toward responsible AI development and operation. It represents an important step in CSA’s broader AI Safety Initiative. It also serves as a precursor to the forthcoming STAR for AI certification program.
By signing the pledge, organizations affirm their alignment with four foundational principles of trusted AI:
- Safe and Compliant Systems: Develop and manage AI systems safely and in compliance with applicable laws.
- Transparency: Make AI systems explainable and open in how they function.
- Ethical Accountability: Develop and govern AI fairly, with responsibility for outcomes clearly assigned.
- Privacy Protection: Uphold the highest standards of data privacy.
As Jim Reavis, CSA Co-Founder and CEO, said: "The decisions we make today around AI governance, ethics, and security will shape not only the future of our organizations and our industry, but of society at large. The AI Trustworthy Pledge provides a tangible opportunity to lead in this space, not just by managing risk, but by actively driving responsible innovation and helping to establish the industry standards of tomorrow."
AI Trustworthy Pledge participants receive a digital badge to share on their website, social channels, and customer communications. CSA also displays their logos on the AI Trustworthy Pledge landing page.
Why Your Organization Should Take the Pledge
The AI Trustworthy Pledge is not just symbolic, but a powerful market signal. By displaying the digital badge and being listed on CSA’s official site, organizations visibly demonstrate their leadership in responsible AI.
Benefits include:
- Trust as a differentiator: Customers and partners increasingly select vendors that prove their AI systems are safe, transparent, and ethical.
- Momentum toward certification: The pledge serves as a precursor to the STAR for AI Level 1 self-assessment, launching in late 2025, which will formalize assurance through CSA’s AI Controls Matrix.
- Collective impact: As more organizations sign, trustworthy AI becomes the expectation, not the exception.
Early adopters of the pledge include Airia, Endor Labs, Deloitte, Okta, Reco, Redblock, Securiti AI, Whistic, and Zscaler.
How to Take the Pledge
Participation is designed to be straightforward and accessible. Organizations simply:
- Visit the CSA STAR for AI website.
- Indicate their interest via the form and await an email with further instructions.
- Confirm alignment with the four principles of trustworthy AI.
- Receive a digital badge and recognition on CSA’s site.
Notably, organizations do not need to be CSA corporate members or STAR Registry participants. The pledge is free and open to all.
Case Study: How Zendesk Applies the AI Trustworthy Pledge
Zendesk powers exceptional service for every person on the planet. As a leader in AI-powered service, Zendesk offers the Zendesk Resolution Platform, designed to redefine customer experience by integrating AI agents, a comprehensive knowledge graph, actions and integrations, governance and control, measurement and insights, and human expertise. AI trust is therefore at the heart of its product development, and Zendesk provides a compelling example of how the AI Trustworthy Pledge translates into practice:
- Governance in Action: Zendesk has implemented a comprehensive AI Management System aligned with the NIST AI Risk Management Framework and achieved ISO 42001 certification. Oversight is provided by an AI Governance Executive Committee, ensuring accountability and continuous alignment with responsible AI practices.
- Privacy & Compliance: Zendesk’s systems respect data ownership, limit exposure, and comply with GDPR, CCPA, HIPAA, and HDS. Privacy features such as entity detection and PII redaction protect sensitive data (illustrated in the sketch after this list).
- Transparency & Explainability: Zendesk’s AI agents reveal their reasoning processes, while visible source content grounds generative replies.
- Ethical Principles: Guided by security, privacy, fairness, transparency, and explainability, Zendesk ensures its AI features reflect responsible innovation.
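To make one of these practices concrete, here is a minimal, hypothetical Python sketch of rule-based PII redaction. The patterns and function names are illustrative assumptions, not Zendesk's implementation; a production system would combine trained entity-detection models with rules like these.

```python
import re

# Hypothetical patterns for a few common PII types. A real redaction
# pipeline would rely on trained entity-detection models, not regexes alone.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> "Reach me at [EMAIL REDACTED] or [PHONE REDACTED]."
```

Typed placeholders (rather than simple deletion) preserve the structure of the original text, so downstream AI agents can still reason over the message while sensitive values stay out of prompts and logs.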
By embedding these principles, Zendesk both meets compliance standards and strengthens customer trust. Zendesk also contributes to the safety of the industry at large: its recent AI Trust Gap Report offers a practical path to scaling AI responsibly and effectively.
Further CSA AI Content to Explore
The pledge is just the beginning. CSA offers a growing ecosystem of research, tools, and training to help organizations succeed with AI responsibly:
- AI Safety Initiative: A premier coalition of trusted experts who converge to develop and deliver essential AI guidance and tools. The Initiative empowers organizations of all sizes to deploy AI solutions that are safe, responsible, and compliant.
- Whitepapers on AI Safety: Practical guidance for enterprise adoption. The latest releases cover identity and access management for agentic AI, secure agentic system design, AI organizational responsibilities, and more.
- AI Controls Matrix (AICM): A first-of-its-kind vendor-agnostic framework for cloud-based AI systems. Organizations can use the AICM to develop, implement, and operate AI technologies in a secure and responsible manner. AICM maps to leading standards, including ISO 42001, ISO 27001, NIST AI RMF 1.0, and BSI AIC4.
- Trusted AI Safety Expert Program: Coming in late October 2025, a rigorous, research-backed certificate program for professionals who build, manage, or audit intelligent systems. Built in partnership with Northeastern University, this professional credential will provide skills, structure, and recognition to lead responsibly in the era of AI.
Conclusion
Trustworthy AI is no longer optional. It is the foundation for sustainable, ethical, and competitive innovation. By taking the CSA AI Trustworthy Pledge, organizations signal their leadership, prepare for future certification, and join a global movement to make AI safe and reliable for all.
Now is the time to step forward. Take the pledge today and help build a future where AI is both powerful and trustworthy.