
Every RSAC Keynote Asked the Same Five Questions. Here's the Framework That Answers Them.

Published 04/03/2026

Written by Josh Woodruff, Founder and CEO of Massive Scale Consulting.

Something unusual happened at RSAC 2026. Not unusual in the "new product launch" sense. Unusual in the "everyone independently said the same thing without coordinating" sense.

Microsoft's Vasu Jakkal: "Zero Trust must extend to AI." Cisco's Jeetu Patel: "Move from access control to action control. Authorize every single action." CrowdStrike's George Kurtz: the biggest governance gap in enterprise technology is around AI. Splunk's John Morgan called for "an agentic trust and governance model."

Four companies. Four separate stages. The same conclusion.

The industry has reached consensus on the problem. But consensus on the problem has never been the hard part. The hard part is the question nobody on those stages answered: how do we actually build it?

 

The Numbers Behind the Consensus

The keynote speakers didn't arrive at the same conclusion by accident. The data pushed them there.

79% of organizations are already using AI agents (PwC, 2025). But 86% of those agents were deployed without security approval (Gravitee, February 2026, surveying 919 organizations). That leaves a 65-point gap between agent adoption and security-approved deployment. And a CSA survey presented at RSAC found that only 26% of organizations have AI governance policies in place. For a Zero Trust community, that number should be unacceptable. This is our domain.

The attack surface is expanding at a pace we haven't seen before. During RSAC week, CrowdStrike disclosed ClawHavoc: the first documented supply chain attack targeting agentic AI. 1,100 poisoned skills in the OpenClaw marketplace. Cisco independently found that 36% of skills in that same marketplace contain detectable prompt injection. CrowdStrike also reported that average breakout time has dropped to 29 minutes, with the fastest observed breakout at 27 seconds. AI is driving that acceleration on both sides.

But the most consequential data point came from a panel, not a keynote. At the OWASP AIVSS session, NIST's Apostol Vassilev shared peer-reviewed research (forthcoming) establishing mathematically that no finite set of guardrails is universally robust against adversarial prompts. You cannot red-team your way to "secure." You cannot ship enough filters. The defense surface is finite. The attack surface is not.

For Zero Trust practitioners, this should sound familiar. It's the same insight that gave birth to Zero Trust itself: perimeter defense has an upper bound. Assume breach. Design for resilience.

That's exactly what we need to do for AI agents.

 

Five Themes, Five Stages, One Architecture

Across keynotes, panels, and Innovation Sandbox pitches, I watched the same five themes surface independently, over and over.

Agents need verifiable identity. Microsoft, Cisco, CrowdStrike, and Armis all said it. Token Security, an Innovation Sandbox finalist, built their entire company around it. Their pitch crystallized something important: every identity wave creates a platform. SAML for human identity. SPIFFE for machine identity. Agent identity is the next wave. And right now, most organizations can't answer the question "how many agents do we have?" Token discovered 600 ungoverned agents at a single Fortune 500 company in 24 hours. Nobody knew they existed.

Behavior must be monitored in real time. SentinelOne's Tomer Weingarten framed it most sharply: "Behavior is the decisive signal." It's the only way to distinguish benign from malicious at machine speed. Microsoft, CrowdStrike, and Armis described the same requirement from different angles. You can't audit an LLM's weights into trustworthiness. You have to watch what the agent actually does.

Data flowing in and out needs governance. Microsoft and Cisco both stressed this. Ken Huang (OWASP, CSA) described what he calls the "lethal trifecta": access to internal sensitive data, connection to untrusted external data, and the ability to execute actions. OpenClaw has all three by default. If you're not governing what your agents consume and produce, you're flying blind.

Least privilege must extend to actions, not just access. Cisco's Jeetu Patel said it plainly: "For years we've talked about Zero Trust for humans. For agents, move from access control to action control." Just-in-time permissions. Just enough permission. Just long enough. Then revoke.
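To make "just in time, just enough, just long enough" concrete, here is a minimal sketch of action-level authorization. This is illustrative only, not part of the ATF specification: the class names and TTL mechanism are assumptions, but the pattern (a time-boxed grant per action that can be revoked in seconds) is the point.

```python
import time

class ActionGrant:
    """A time-boxed permission for one specific action: just enough, just long enough."""
    def __init__(self, agent_id, action, ttl_seconds):
        self.agent_id = agent_id
        self.action = action
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def is_valid(self):
        return not self.revoked and time.monotonic() < self.expires_at

class ActionController:
    """Authorizes every single action, not just session-level access."""
    def __init__(self):
        self.grants = {}

    def grant(self, agent_id, action, ttl_seconds=60):
        self.grants[(agent_id, action)] = ActionGrant(agent_id, action, ttl_seconds)

    def authorize(self, agent_id, action):
        g = self.grants.get((agent_id, action))
        return g is not None and g.is_valid()

ctl = ActionController()
ctl.grant("billing-agent", "read:invoices", ttl_seconds=30)
print(ctl.authorize("billing-agent", "read:invoices"))   # True: granted and inside the window
print(ctl.authorize("billing-agent", "delete:invoices")) # False: never granted
```

The key design choice: the default answer is "no." An action the agent was never explicitly granted, or one whose window has closed, fails closed.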

Incident response must operate at machine speed. When breakout happens in 27 seconds, human-speed response is too slow. CrowdStrike, Armis, and SANS all emphasized automated containment: kill switches, circuit breakers, and response playbooks that execute without waiting for a human to decide whose job it is to pull the plug.

These five themes aren't new individually. But hearing the entire industry converge on them simultaneously, for AI agents specifically, is new. And they map directly to the five core elements of the Agentic Trust Framework (ATF), published through CSA in February 2026. That mapping isn't a coincidence. ATF was built by applying Zero Trust first principles to autonomous AI agents. The industry is arriving at the same conclusions because Zero Trust principles are universally applicable.

That was always the point.


 

From Questions to Architecture

ATF translates Zero Trust principles into five questions any organization can ask about any AI agent:

  1. Who are you? Every agent gets unforgeable credentials. Identity verified at every interaction.
  2. What are you doing? Continuous behavioral monitoring. AI watching AI for anomalous patterns.
  3. What are you eating? What are you serving? Input validation, output governance, data lineage. Guard both sides.
  4. Where can you go? Least-privilege boundaries. The agent accesses only what it needs, when it needs it, for as long as it needs it.
  5. What if you go rogue? Kill switches, containment protocols, recovery playbooks. Tested quarterly, not written and forgotten.

The framework is deliberately plain language. A CISO can read it. An engineer can build from it. A board member can understand the gap report. If you can answer all five questions for every agent in your environment, you have a governance architecture. If you can't answer even one, you just found your gap. No consultants needed to figure out where you're exposed.
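The gap-finding logic is simple enough to sketch in a few lines. The field names below are hypothetical (the real self-assessment has thirty scored questions), but they show the idea: for each agent, any unanswered question is a named gap.

```python
# Map each ATF element to its plain-language question.
ATF_QUESTIONS = {
    "identity": "Who are you?",
    "behavior": "What are you doing?",
    "data": "What are you eating/serving?",
    "segmentation": "Where can you go?",
    "incident_response": "What if you go rogue?",
}

def gap_report(agent):
    """Return the ATF questions this agent's governance record cannot answer."""
    return [q for key, q in ATF_QUESTIONS.items() if not agent.get(key)]

agent = {
    "name": "invoice-bot",
    "identity": True,           # unforgeable credentials issued
    "behavior": True,           # behavioral monitoring enabled
    "data": False,              # no input/output governance yet
    "segmentation": True,       # least-privilege boundaries set
    "incident_response": False, # no tested kill switch
}
print(gap_report(agent))
# ['What are you eating/serving?', 'What if you go rogue?']
```

Run this over every agent in your inventory and the output is, in effect, the board-readable gap report the framework describes.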

ATF includes a maturity model with four levels, Intern through Principal, each with explicit promotion and demotion gates. Agents start at Intern (observe only, read-only mode) and earn greater autonomy through demonstrated trustworthiness, passing five gates: performance, security validation, business value, incident record, and governance sign-off. Critically, any significant incident triggers automatic demotion. Autonomy is earned, not granted by default, and it can be revoked in seconds. This operationalizes the "least agency" principle that both OWASP and Forrester have identified as foundational.
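The maturity model is, at heart, a small state machine, which a sketch makes explicit. The class below is an illustration under stated assumptions (the level names, the five gate names, and one-level demotion per incident come from the description above; everything else is invented for clarity).

```python
LEVELS = ["Intern", "Junior", "Senior", "Principal"]  # Intern = observe-only, read-only
GATES = ["performance", "security_validation", "business_value",
         "incident_record", "governance_signoff"]

class AgentMaturity:
    def __init__(self):
        self.level = 0  # every agent starts at Intern; autonomy is earned, not granted

    def promote(self, gate_results):
        """Advance one level only if all five promotion gates pass."""
        if all(gate_results.get(g) for g in GATES) and self.level < len(LEVELS) - 1:
            self.level += 1
        return LEVELS[self.level]

    def record_incident(self, significant):
        """Any significant incident triggers automatic demotion."""
        if significant and self.level > 0:
            self.level -= 1
        return LEVELS[self.level]

m = AgentMaturity()
passed = {g: True for g in GATES}
m.promote(passed)                # Intern -> Junior
m.promote(passed)                # Junior -> Senior
print(m.record_incident(True))   # demoted: prints "Junior"
```

Note the asymmetry: promotion requires every gate to pass, while demotion requires only one incident. That asymmetry is the "least agency" principle in code.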

Organizations can have the MVP governance stack operational in two to three weeks using open source components, with an enterprise-grade deployment in eight to twelve weeks. Each of the five elements has defined core requirements (25 total across the framework), a phased implementation approach, and compliance mappings to SOC 2, ISO 27001, NIST AI RMF, and EU AI Act provisions. Organizations implementing ATF are simultaneously building compliance evidence.

One design choice is worth highlighting for this audience. Three of ATF's five elements (Behavioral Monitoring, Segmentation, Incident Response) are explicitly about what happens after prevention fails. The framework allocates 60% of its architecture to resilience, not prevention. That's not an accident. It's the direct application of the core Zero Trust assumption ("assume breach") to agentic AI. And it aligns directly with Vassilev's mathematical result: if perfect prevention is impossible, resilience is the architecture.

 

Where ATF Sits in the CSA Ecosystem

ATF is one piece of a larger picture, and the relationships matter.

CSA's AI Controls Matrix (AICM) defines 243 controls for AI systems broadly, and won a 2026 CSO Award for good reason. ATF operationalizes the subset of those controls that applies specifically to autonomous agents, adding a maturity model for progressive autonomy that AICM doesn't include. Think of it this way: AICM tells you what controls exist for AI. ATF tells you how to apply them to agents and how to measure whether you've done it.

CSA also launched the CSAI Foundation at RSAC with the mission of "Securing the Agentic Control Plane." That mission describes exactly what ATF operationalizes. As CSAI's programs develop, particularly the Agentic Best Practices program, ATF's role is as the deployable governance model within that ecosystem. CSAI is securing the agentic control plane. ATF is the operating model for governing the agents within it.

ATF also complements the broader framework landscape. MAESTRO identifies what could go wrong at each layer of an agentic system. The OWASP Agentic Top 10 catalogs the highest-impact risks. NIST AI RMF provides the risk management structure. ATF bridges from all of them to implementation: what do you build, and how do you know when you've built it?

The principle is simple: bridge, don't compete. ATF's value increases when it connects to other frameworks. Every crosswalk makes it more useful.

 

The Ecosystem Is Already Building

Within 30 days of ATF's publication on the CSA blog, two independent organizations built implementations against the spec. Neither was solicited. Neither coordinated with the other.

A Senior Architect at Microsoft, running what he describes as Microsoft's "AI Native Team," built the Agent Governance Toolkit: a full governance middleware layer covering all five ATF elements. It's now a project in the microsoft/ GitHub organization (community preview). The toolkit implements a four-layer architecture with deterministic policy enforcement. No LLM in the governance loop. In production testing, 11 specialized agents running concurrently generated over 7,000 governance decisions at sub-millisecond latency across 11 days of continuous operation. The team filed a formal proposal (CSA-ATF-PROPOSAL.md) to engage with the CSA Zero Trust Working Group and contribute back to the spec, including three proposed additions: agent delegation chain verification, AI-BOM integration for model provenance, and a trust scoring quantification methodology.

Separately, Berlin AI Labs submitted a pull request to the ATF specification repository claiming 12 deployed services as a reference implementation. Different organization. Different country. Same spec. No coordination.

In standards-body terms, going from publication to multiple independent implementations in 30 days is essentially instantaneous. That kind of organic adoption validates that the spec addresses a real, felt need. Engineers read it and knew what to build. That was the design goal.

ATF's governance layer doesn't operate in isolation. Through the CSA Zero Trust Working Group, we're developing a five-layer reference architecture that positions ATF's supervisory plane above the connectivity, identity, and agent runtime layers, with enterprise risk and compliance at the top. Each layer enforces independently, meaning a failure at one layer doesn't compromise the others. That reference architecture, including concrete implementation guidance per element for different deployment models, will be the subject of a forthcoming CSA paper.

 

What You Should Do This Month

This week: Inventory your AI agents. You cannot secure what you do not know about. The six Identity questions in ATF's self-assessment force this inventory. You cannot answer them without knowing what agents exist, what credentials they hold, and what access they've been granted. If Token can find 600 ungoverned agents at a Fortune 500 in 24 hours, what would they find in your environment?

This month: Run the ATF self-assessment. Thirty questions, scored by element, free at agentictrustframework.ai. Every unanswered question is a gap. You don't need a consultant to find out where you're exposed. You need 30 minutes.

This quarter: Brief your executive team on agent-specific risk. The RSAC data gives you the ammunition. 86% of agents deployed without approval. Only 26% of organizations with governance policies. A 65-point gap between adoption and security. Position security as the enabler, not the blocker. The companies deploying agents fastest are the ones with the strongest governance. Security is not the brake pedal. It's the roll cage that lets you take corners at 200 miles an hour.

For the ZTAC community: The Zero Trust Working Group is actively developing ATF reference architecture patterns and framework crosswalks to AICM and the OWASP Agentic Top 10. If you want to contribute, engage through CSA. This is the work.


Every RSAC keynote described the same elephant. ATF gives it a name, a structure, and a way to measure progress.

The framework, the assessment, and the spec are free and open. The only cost is inaction. And inaction is a 65-point gap between how fast we're deploying agents and how well we're governing them.

That gap is where every AI agent disaster lives. Let's close it.


About the Author

Josh Woodruff is Founder and CEO of MassiveScale.AI, a CSA Research Fellow, co-chair of the CSA Zero Trust Working Group, and IANS Research Faculty. He is the author of "Agentic AI + Zero Trust: A Guide for Business Leaders" with a foreword by John Kindervag, creator of Zero Trust. The Agentic Trust Framework specification and self-assessment are available at agentictrustframework.ai.
