
The State of AI Security and Governance
Who it's for:
  • CISOs and security leadership
  • Security architects and engineers
  • AI/ML and data science teams
  • IT and cloud infrastructure leaders
  • Risk, compliance, and governance professionals

Release Date: 12/17/2025

Organizations are rapidly moving from AI experimentation to operational deployment, yet their ability to secure this transformation varies widely. Commissioned by Google, this report provides a data-driven look at how enterprises are adopting generative and agentic AI. It examines the risks they face, along with the governance structures that determine whether innovation advances responsibly.

Based on a global industry survey, this report shows that AI governance is the strongest predictor of AI readiness. Mature programs correlate with higher confidence, increased staff training, and more responsible innovation. It also highlights a meaningful shift: security teams have become early adopters of AI, using it for threat detection, red teaming, automation, incident response, and more.

As enterprise AI adoption accelerates, organizations are employing a multi-model strategy dominated by GPT, Gemini, Claude, and LLaMA. Despite leadership enthusiasm, most organizations are still uncertain about their ability to secure AI systems. They cite persistent skills gaps and limited understanding of emerging AI-specific risks. Data exposure remains the top concern, even as threats like prompt injection and data poisoning continue to rise.

This report provides practical insights that can help you strengthen AI security, governance maturity, and future resilience.

Key Takeaways:
  • AI governance is the “maturity multiplier” driving responsible adoption
  • Security teams are leading early use of AI in cybersecurity workflows
  • Multi-model strategies are growing, dominated by a small group of providers
  • Skills gaps and limited risk understanding hinder secure AI deployment
  • Data exposure is the top enterprise AI security concern

Related resources
  • Data Security within AI Environments
  • Introductory Guidance to AICM
  • Capabilities-Based Risk Assessment (CBRA) for AI Systems
  • Agentic AI Security: New Dynamics, Trusted Foundations (Published: 12/18/2025)
  • AI Security Governance: Your Maturity Multiplier (Published: 12/18/2025)
  • Deterministic AI vs. Generative AI: Why Precision Matters for Automated Security Fixes (Published: 12/17/2025)
  • Enhancing the Agentic AI Security Scoping Matrix: A Multi-Dimensional Approach (Published: 12/16/2025)