Calibrating AI Controls to Real Risk: The Upcoming Capabilities-Based Risk Assessment (CBRA) for AI Systems
Published 10/27/2025
Governing generative and agentic AI while still enabling innovation can feel like whiplash. In the upcoming Cloud Security Alliance (CSA) whitepaper, we introduce the Capabilities-Based Risk Assessment (CBRA), a structured methodology for evaluating enterprise AI risk. It looks at the capabilities and context of the system, not just its function or output, so security teams can right-size controls. The goal is for innovation to scale without surprises.
Join our session at the DataSecAI Conference to get the full rundown of the CBRA.
Why CBRA now?
Traditional checklists fail to keep pace with AI that “thinks, acts, and evolves.” Agentic systems can chain tools, launch jobs, and adapt behavior, all beyond human line of sight. CBRA starts by asking a blunt question: “What is the worst-case scenario if this AI system makes a mistake or someone misuses it?” From there, it quantifies exposure using four multiplicative elements:
System Risk = Criticality × Autonomy × Permissions × Impact Radius (blast radius)
- Criticality: What breaks (revenue, safety, compliance) if the service is unavailable, incorrect, or hostile?
- Autonomy: How far can the AI perceive, decide, and act without human approval? Are there circuit breakers and step-up approvals?
- Permissions: What can it read, write, execute, or delegate across your environment and data?
- Impact radius: What’s the maximum plausible harm in one adverse scenario, given the autonomy and permissions you granted?
Because the score is multiplicative, small improvements to autonomy guardrails or permission scopes can materially compress total system risk, as the sketch below illustrates. The result is a practical blueprint for risk reduction as autonomy and integrations expand.
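To make the multiplication concrete, here is a minimal Python sketch of a CBRA-style score. The 1-to-4 anchor scales, the factor values, and the class name are illustrative assumptions; the paper defines the actual anchor scales for each factor.

```python
from dataclasses import dataclass

@dataclass
class CbraFactors:
    """One CBRA scoring input per factor (hypothetical 1-4 anchor scales)."""
    criticality: int    # 1 = low-stakes helper ... 4 = revenue/safety/compliance critical
    autonomy: int       # 1 = human approves every action ... 4 = acts and adapts unsupervised
    permissions: int    # 1 = narrow read-only access ... 4 = broad write/execute/delegate rights
    impact_radius: int  # 1 = single user or sandbox ... 4 = enterprise-wide blast radius

    def system_risk(self) -> int:
        # System Risk = Criticality x Autonomy x Permissions x Impact Radius
        return self.criticality * self.autonomy * self.permissions * self.impact_radius

# Because the factors multiply, trimming one factor compresses the whole score:
# narrowing permission scopes from 3 to 2 here drops the score from 36 to 24.
before = CbraFactors(criticality=3, autonomy=2, permissions=3, impact_radius=2)
after  = CbraFactors(criticality=3, autonomy=2, permissions=2, impact_radius=2)
print(before.system_risk(), "->", after.system_risk())  # 36 -> 24
```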
What’s inside the upcoming paper
The paper defines three AI risk tiers (Low, Medium, High). It shows how to map each tier to CSA’s AI Controls Matrix (AICM) so you apply proportional governance. AICM v1.0 spans 18 security domains and ~240 control objectives, including Identity & Access Management, Incident Response, Model Security, and Bias & Fairness. You can use CBRA to decide how much of AICM to apply to your specific scenario, and where to go deeper.
The full document will also include:
- Anchor scales: Concrete scoring guidance for System Criticality, AI Autonomy, Access Permissions, and Impact Radius.
- Examples: Ranging from a low-risk content helper (score 1) to an agent that can rotate IAM policies in production (score 48).
- Alignment with AICM: How Low, Medium, and High CBRA tiers align to graduated AICM control sets (a minimal tiering sketch follows this list).
- Industry scenarios: Side-by-side examples (finance, healthcare, manufacturing) that show how CBRA focuses effort where the blast radius is largest and speeds approvals for low-consequence pilots.
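For a rough sense of how a score might translate into a tier, here is a hypothetical mapping. The cut points below are illustrative assumptions, not the paper's actual thresholds.

```python
def cbra_tier(system_risk: int) -> str:
    """Map a CBRA score to a tier using hypothetical cut points (the paper defines the real ones)."""
    if system_risk <= 8:
        return "Low"     # e.g., the content helper scoring 1
    if system_risk <= 24:
        return "Medium"
    return "High"        # e.g., the production IAM-rotating agent scoring 48

for score in (1, 24, 48):
    print(score, "->", cbra_tier(score))
```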
How CBRA plugs into today’s frameworks
CBRA slots into the frameworks you already use:
- CSA AICM: Use the CBRA to calibrate the breadth and rigor of AICM implementation. For example, medium-risk systems emphasize validation, monitoring, and transparency; high-risk systems implement virtually all relevant controls in depth.
- NIST AI RMF: Use the CBRA to classify use cases and then select RMF actions proportionally.
- EU AI Act: CBRA’s tiering is philosophically aligned with the Act’s risk-based model and the Commission’s evolving guidance for systems and models with systemic risk. Use CBRA internally to anticipate where high-risk obligations will bite.
What you can do with it on day one
- Standardize vendor reviews: Score autonomy, permissions, and impact for each AI vendor or service to compare unlike systems on a level playing field. Tie renewals and new integrations to re-scoring.
- Targeted hardening: Use the score to build a risk-reduction roadmap. Shrink active tool scopes, shorten token lifetimes, enforce customer-managed keys, require step-up approvals for sensitive actions, and demand per-tenant isolation with impact tests.
- Proportional governance: Fast-track low-risk pilots with baseline AICM controls. Go deep on high-risk, agentic systems with rigorous validation, red-teaming, auditability, and kill-switches.
Who it's for
- Security, risk, and compliance leaders who need a forward-looking lens to quantify AI exposure at the speed of innovation.
- Procurement and vendor-risk teams tired of one-size-fits-all questionnaires that miss the point for agentic AI.
- Builders shipping AI features who want clarity on what good looks like at Low vs. Medium vs. High risk.
CBRA is designed to evolve alongside the AI systems it measures and to keep your governance balanced between pace and prudence.
If any of this resonates, make sure to check out the announcement session on Day 2 of Cyera’s DataSecAI Conference. Taking place November 12-14, the conference brings together progressive cybersecurity, data governance, and business technology teams to supercharge their AI security programs. CSA's John Yeoh and the Chair of the DataSecAI Conference, Pete Chronis, will discuss the CBRA, which CSA will release the same day.