
Capabilities-Based Risk Assessment (CBRA) for AI Systems
Who it's for:
  • Chief Information Security Officers (CISOs)
  • AI governance and compliance leaders
  • Risk management and audit professionals
  • Data protection officers
  • AI product managers and solution architects


Release Date: 11/12/2025

This publication introduces the Capabilities-Based Risk Assessment (CBRA), a structured, scalable approach to evaluating AI risk in enterprise environments. CSA’s AI Safety Initiative developed this framework to help organizations assess risk based on what a given AI system can actually do.

CBRA evaluates AI through four core dimensions: System Criticality, Autonomy, Access Permissions, and Impact Radius. It uses these dimensions to calculate a composite risk profile. This enables organizations to align security controls with the true capabilities and potential consequences of each AI deployment.

Mapped directly to the AI Controls Matrix (AICM), CBRA helps enterprises apply proportional safeguards: low-risk AI systems receive lightweight controls, medium-risk systems enhanced monitoring, and high-risk systems full-scale governance. The result is a consistent framework for risk-tiered oversight across industries.
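To make the tiering idea concrete, here is a minimal sketch in Python. The publication defines the four CBRA dimensions, but the 1–5 rating scale, the averaged composite score, and the tier thresholds below are assumptions made for illustration, not the framework's actual scoring formula.

```python
from dataclasses import dataclass

@dataclass
class CBRAProfile:
    """Hypothetical CBRA-style profile: each dimension rated 1 (low) to 5 (high)."""
    system_criticality: int
    autonomy: int
    access_permissions: int
    impact_radius: int

    def composite_score(self) -> float:
        # Assumed composite: simple average of the four dimension ratings.
        dims = (self.system_criticality, self.autonomy,
                self.access_permissions, self.impact_radius)
        return sum(dims) / len(dims)

    def risk_tier(self) -> str:
        # Assumed thresholds mapping the score to the three control levels.
        score = self.composite_score()
        if score < 2.5:
            return "low"     # lightweight controls
        if score < 4.0:
            return "medium"  # enhanced monitoring
        return "high"        # full-scale governance

# Example: an internal FAQ chatbot vs. an autonomous agent with production access.
chatbot = CBRAProfile(system_criticality=2, autonomy=1,
                      access_permissions=1, impact_radius=2)
agent = CBRAProfile(system_criticality=5, autonomy=4,
                    access_permissions=5, impact_radius=4)
```

Under these assumed thresholds, the chatbot lands in the low tier and the autonomous agent in the high tier, matching the intuition that controls should scale with capability.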

As AI becomes more integrated into decision-making, CBRA equips organizations to manage risk at the speed of innovation. Use CBRA to ensure responsible use, regulatory alignment, and public trust.

Key Takeaways:
  • A capability-driven model for AI risk assessment
  • How the risk tiers align with the AICM
  • How to implement scalable, risk-informed AI governance
  • Applications for generative and agentic AI systems across sectors
