Capabilities-Based Risk Assessment (CBRA) for AI Systems
Establishing a Risk-Based Approach for Assessing Vendor AI Risk
Released: 11/12/2025
This publication introduces the Capabilities-Based Risk Assessment (CBRA), a structured, scalable approach to evaluating AI risk in enterprise environments. CSA’s AI Safety Initiative developed this framework to help assess risk based on what a given AI system can do.
CBRA evaluates AI through four core dimensions: System Criticality, Autonomy, Access Permissions, and Impact Radius. It uses these dimensions to calculate a composite risk profile. This enables organizations to align security controls with the true capabilities and potential consequences of each AI deployment.
Mapped directly to the AI Controls Matrix (AICM), CBRA helps enterprises apply proportional safeguards. Low-risk AI gets lightweight controls, medium-risk gets enhanced monitoring, and high-risk gets full-scale governance. The result is a consistent framework for risk-tiered oversight across industries.
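The dimension-to-tier flow described above can be sketched in code. This is a minimal illustration only: the publication does not specify scoring scales, weights, or tier thresholds, so the 1–5 scale, the equal-weight average, and the cutoff values below are all hypothetical placeholders, as are the class and function names.

```python
from dataclasses import dataclass

# Hypothetical mapping of risk tiers to the proportional safeguards
# described in the text (lightweight / enhanced / full-scale).
TIER_CONTROLS = {
    "low": "lightweight controls",
    "medium": "enhanced monitoring",
    "high": "full-scale governance",
}

@dataclass
class CBRAProfile:
    """One score per CBRA dimension, on an assumed 1 (low) to 5 (high) scale."""
    system_criticality: int
    autonomy: int
    access_permissions: int
    impact_radius: int

    def composite_score(self) -> float:
        # Assumption: an unweighted average; the actual CBRA composite
        # calculation may weight dimensions differently.
        return (
            self.system_criticality
            + self.autonomy
            + self.access_permissions
            + self.impact_radius
        ) / 4

    def tier(self) -> str:
        # Assumed thresholds for illustration only.
        score = self.composite_score()
        if score < 2.5:
            return "low"
        if score < 4.0:
            return "medium"
        return "high"

# Example: a highly autonomous agent with broad access lands in the high tier.
agent = CBRAProfile(system_criticality=5, autonomy=4,
                    access_permissions=4, impact_radius=5)
print(agent.tier(), "->", TIER_CONTROLS[agent.tier()])
```

The point of the sketch is the shape of the assessment, not the numbers: score each dimension independently, reduce to a composite, and let the tier select the control set from the AICM mapping.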
As AI becomes more integrated into decision-making, CBRA equips organizations to manage risk at the speed of innovation. Use CBRA to ensure responsible use, regulatory alignment, and public trust.
Key Takeaways:
- A capability-driven model for AI risk assessment
- How the risk tiers align with the AICM
- How to implement scalable, risk-informed AI governance
- Applications for generative and agentic AI systems across sectors
Best For:
- Chief Information Security Officers (CISOs)
- AI governance and compliance leaders
- Risk management and audit professionals
- Data protection officers
- AI product managers and solution architects