

Capabilities-Based Risk Assessment (CBRA) for AI Systems
Who it's for:
  • Chief Information Security Officers (CISOs)
  • AI governance and compliance leaders
  • Risk management and audit professionals
  • Data protection officers
  • AI product managers and solution architects


Release Date: 11/12/2025

This publication introduces the Capabilities-Based Risk Assessment (CBRA), a structured, scalable approach to evaluating AI risk in enterprise environments. CSA’s AI Safety Initiative developed this framework to help assess risk based on what a given AI system can do.

CBRA evaluates AI through four core dimensions: System Criticality, Autonomy, Access Permissions, and Impact Radius. It uses these dimensions to calculate a composite risk profile. This enables organizations to align security controls with the true capabilities and potential consequences of each AI deployment.

Mapped directly to the AI Controls Matrix (AICM), CBRA helps enterprises apply proportional safeguards. Low-risk AI gets lightweight controls, medium-risk gets enhanced monitoring, and high-risk gets full-scale governance. The result is a consistent framework for risk-tiered oversight across industries.
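To make the tiering idea concrete, here is a minimal sketch of how a composite profile across the four dimensions could drive tier assignment. The dimension names come from the publication; the 1–5 scales, equal weighting, and tier thresholds are illustrative assumptions, not CBRA's actual scoring method.

```python
from dataclasses import dataclass

@dataclass
class AICapabilityProfile:
    """Hypothetical rating of an AI system on CBRA's four dimensions.

    Scales (1 = low .. 5 = high) and equal weighting are assumptions
    made for this sketch, not taken from the publication.
    """
    system_criticality: int
    autonomy: int
    access_permissions: int
    impact_radius: int

    def composite_score(self) -> float:
        # Simple unweighted average of the four dimension ratings.
        dims = [self.system_criticality, self.autonomy,
                self.access_permissions, self.impact_radius]
        return sum(dims) / len(dims)

    def risk_tier(self) -> str:
        # Illustrative thresholds mapping the composite score to a tier.
        score = self.composite_score()
        if score < 2.5:
            return "low"     # lightweight controls
        if score < 4.0:
            return "medium"  # enhanced monitoring
        return "high"        # full-scale governance

# Example: a highly autonomous agent with broad access permissions
agent = AICapabilityProfile(system_criticality=4, autonomy=5,
                            access_permissions=4, impact_radius=3)
print(agent.composite_score(), agent.risk_tier())  # 4.0 high
```

In practice an organization would weight dimensions and set thresholds to match its own risk appetite; the point is that controls scale with measured capability rather than with a one-size-fits-all policy.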

As AI becomes more integrated into decision-making, CBRA equips organizations to manage risk at the speed of innovation while supporting responsible use, regulatory alignment, and public trust.

Key Takeaways:
  • A capability-driven model for AI risk assessment
  • How the risk tiers align with the AICM
  • How to implement scalable, risk-informed AI governance
  • Applications for generative and agentic AI systems across sectors