Capabilities-Based Risk Assessment (CBRA) for AI Systems
Who it's for:
  • Chief Information Security Officers (CISOs)
  • AI governance and compliance leaders
  • Risk management and audit professionals
  • Data protection officers
  • AI product managers and solution architects

Release Date: 11/12/2025

This publication introduces the Capabilities-Based Risk Assessment (CBRA), a structured, scalable approach to evaluating AI risk in enterprise environments. CSA’s AI Safety Initiative developed this framework to help organizations assess risk based on what a given AI system can actually do.

CBRA evaluates AI through four core dimensions: System Criticality, Autonomy, Access Permissions, and Impact Radius. It uses these dimensions to calculate a composite risk profile. This enables organizations to align security controls with the true capabilities and potential consequences of each AI deployment.
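The four-dimension scoring described above can be sketched in code. This is a minimal illustration, not the published methodology: the 1–5 scales, equal weighting, and simple averaging are assumptions for demonstration, and the actual CBRA publication defines its own scoring and aggregation rules.

```python
from dataclasses import dataclass


@dataclass
class CbraProfile:
    """Hypothetical 1-5 scoring of the four CBRA dimensions.

    The scales and the equal-weight average below are illustrative
    placeholders, not the framework's published formula.
    """
    system_criticality: int   # how essential the system is to operations
    autonomy: int             # degree of action taken without human review
    access_permissions: int   # breadth of data and system access granted
    impact_radius: int        # scope of consequences if the system fails

    def composite_score(self) -> float:
        """Combine the four dimension scores into one composite value."""
        dims = (self.system_criticality, self.autonomy,
                self.access_permissions, self.impact_radius)
        if any(not 1 <= d <= 5 for d in dims):
            raise ValueError("each dimension must be scored 1-5")
        return sum(dims) / len(dims)


profile = CbraProfile(system_criticality=4, autonomy=3,
                      access_permissions=5, impact_radius=4)
print(profile.composite_score())  # 4.0
```

An assessor would score each deployed AI system once per dimension, then use the composite value to place the system in a risk tier.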

Mapped directly to the AI Controls Matrix (AICM), CBRA helps enterprises apply proportional safeguards. Low-risk AI gets lightweight controls, medium-risk gets enhanced monitoring, and high-risk gets full-scale governance. The result is a consistent framework for risk-tiered oversight across industries.
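The low/medium/high tiering and its proportional safeguards could be expressed as a simple lookup. The thresholds here are invented for illustration; the publication and the AICM mapping define the real cut-offs and control sets.

```python
def risk_tier(score: float) -> str:
    """Map a composite score (1-5 scale) to a risk tier.

    The 2.5 and 4.0 thresholds are illustrative assumptions,
    not the published cut-offs.
    """
    if score < 2.5:
        return "low"
    if score < 4.0:
        return "medium"
    return "high"


# Proportional safeguards per tier, echoing the text above.
CONTROLS = {
    "low": "lightweight controls",
    "medium": "enhanced monitoring",
    "high": "full-scale governance",
}

tier = risk_tier(4.2)
print(tier, "->", CONTROLS[tier])  # high -> full-scale governance
```

Keeping the tiering logic in one place makes the oversight policy auditable: any change to a threshold or a control set is a single, reviewable edit.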

As AI becomes more integrated into decision-making, CBRA equips organizations to manage risk at the speed of innovation. Use CBRA to ensure responsible use, regulatory alignment, and public trust.

Key Takeaways:
  • A capability-driven model for AI risk assessment
  • How the risk tiers align with the AICM
  • How to implement scalable, risk-informed AI governance
  • Applications for generative and agentic AI systems across sectors