Capabilities-Based Risk Assessment (CBRA) for AI Systems
Who it's for:
  • Chief Information Security Officers (CISOs)
  • AI governance and compliance leaders
  • Risk management and audit professionals
  • Data protection officers
  • AI product managers and solution architects

Release Date: 11/12/2025

This publication introduces the Capabilities-Based Risk Assessment (CBRA), a structured, scalable approach to evaluating AI risk in enterprise environments. CSA’s AI Safety Initiative developed this framework to help assess risk based on what a given AI system can do.

CBRA evaluates AI through four core dimensions: System Criticality, Autonomy, Access Permissions, and Impact Radius. It uses these dimensions to calculate a composite risk profile. This enables organizations to align security controls with the true capabilities and potential consequences of each AI deployment.

Mapped directly to the AI Controls Matrix (AICM), CBRA helps enterprises apply proportional safeguards. Low-risk AI gets lightweight controls, medium-risk gets enhanced monitoring, and high-risk gets full-scale governance. The result is a consistent framework for risk-tiered oversight across industries.
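As a rough illustration of the capability-to-tier logic described above, the sketch below scores a system on the four CBRA dimensions and maps the composite score to a control tier. The dimension names come from this publication, but the 1–5 rating scale, the equal weighting, and the tier thresholds are invented for this example and are not CSA's actual scoring method.

```python
from dataclasses import dataclass

@dataclass
class CapabilityProfile:
    """Hypothetical rating of one AI system on the four CBRA dimensions.

    Scale and weighting are illustrative assumptions, not CSA's method.
    """
    system_criticality: int   # 1 (low) .. 5 (high)
    autonomy: int
    access_permissions: int
    impact_radius: int

    def composite_score(self) -> float:
        dims = (self.system_criticality, self.autonomy,
                self.access_permissions, self.impact_radius)
        if not all(1 <= d <= 5 for d in dims):
            raise ValueError("each dimension must be rated 1-5")
        # Equal weighting chosen for simplicity; a real assessment
        # might weight dimensions differently.
        return sum(dims) / len(dims)

    def risk_tier(self) -> str:
        score = self.composite_score()
        if score < 2.5:
            return "low"     # lightweight controls
        if score < 4.0:
            return "medium"  # enhanced monitoring
        return "high"        # full-scale governance

# Example: a highly autonomous agent with broad access permissions
# lands in the high tier and would warrant full-scale governance.
agent = CapabilityProfile(system_criticality=5, autonomy=4,
                          access_permissions=5, impact_radius=4)
print(agent.risk_tier())  # → high
```

The point of the sketch is proportionality: the tier, not a one-size-fits-all policy, determines which AICM safeguards apply to a given deployment.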

As AI becomes more integrated into decision-making, CBRA equips organizations to manage risk at the speed of innovation. Use CBRA to ensure responsible use, regulatory alignment, and public trust.

Key Takeaways:
  • A capability-driven model for AI risk assessment
  • How the risk tiers align with the AICM
  • How to implement scalable, risk-informed AI governance
  • Applications for generative and agentic AI systems across sectors
Related resources:
  • Introductory Guidance to AICM
  • Beyond the Hype: A Benchmark Study of AI Agents in the SOC
  • Analyzing Log Data with AI Models to Meet Zero Trust Principles
  • Why AI Won't Replace Us: The Critical Role of Human Oversight in AI-Driven Workflows (Published: 12/03/2025)
  • The CSA Cloud Controls Matrix v4.1: Strengthening the Future of Cloud Security (Published: 12/02/2025)
  • Navigating the Liminal Edge of AI Security: Deconstructing Prompt Injection, Model Poisoning, and Adversarial Perturbations in the Cognitive Cyber Domain (Published: 12/01/2025)
  • The Layoff Aftershock No One Talks About: The NHIs Left Behind (Published: 11/26/2025)