The State of AI Security and Governance
Released: 12/17/2025
Organizations are rapidly moving from AI experimentation to operational deployment, yet their ability to secure this transformation varies widely. Commissioned by Google, this report provides a data-driven look at how enterprises are adopting generative and agentic AI, the risks they face, and the governance structures that determine whether innovation advances responsibly.
Based on a global industry survey, the report shows that AI governance is the strongest predictor of AI readiness. Mature programs correlate with higher confidence, increased staff training, and more responsible innovation. The report also highlights a meaningful shift: security teams have become early adopters of AI, using it for threat detection, red teaming, automation, incident response, and more.
As enterprise AI adoption accelerates, organizations are employing a multi-model strategy dominated by GPT, Gemini, Claude, and LLaMA. Despite leadership enthusiasm, most organizations are still uncertain about their ability to secure AI systems. They cite persistent skills gaps and limited understanding of emerging AI-specific risks. Data exposure remains the top concern, even as threats like prompt injection and data poisoning continue to rise.
This report provides practical insights that can help you strengthen AI security, governance maturity, and future resilience.
Key Takeaways:
- AI governance is the “maturity multiplier” driving responsible adoption
- Security teams are leading early use of AI in cybersecurity workflows
- Multi-model strategies are growing, dominated by a small group of providers
- Skills gaps and limited risk understanding hinder secure AI deployment
- Data exposure is the top enterprise AI security concern
Best For:
- CISOs and security leadership
- Security architects and engineers
- AI/ML and data science teams
- IT and cloud infrastructure leaders
- Risk, compliance, and governance professionals