
AARM: Finding a Path to Secure the Agentic Runtime

Published 04/30/2026

Written by Jim Reavis, Co-founder and Chief Executive Officer, CSA.

Over the past year, I have found myself returning to the same observation in many different conversations: we are not simply watching AI improve. We are watching a new operational layer in computing emerge in front of us. Autonomous agents are beginning to write code, manage infrastructure, process transactions, interact with enterprise systems, and make decisions in environments that were previously governed almost entirely by human judgment and deterministic software.

At the same time, frontier models are not improving in a smooth, linear fashion. They are making step-function advances that rapidly change what these agents can do. This is why I have been describing the cybersecurity challenge in terms of two exponentials: the growth in model capability and the viral adoption of agents. Each trend is important on its own. But where they intersect, we are confronted with a security problem that does not fit neatly into the frameworks we have relied on for the past two decades.

The question we kept coming back to at Cloud Security Alliance and CSAI Foundation was deceptively simple: how do you secure what an autonomous system actually does? That question has led us through research, working groups, hands-on experimentation, and many discussions with practitioners who are trying to secure real agentic systems before the industry has settled on mature patterns for doing so.

Our first instinct, naturally, was to map agent security into familiar paradigms. Identity matters. Access control matters. Zero Trust matters. Governance frameworks matter. None of these go away. In fact, they become even more important. But as we explored the problem more deeply, it became clear that something fundamental was missing. Agents do not merely access systems. They act within them.

That distinction is critical. An agent may interpret context, form a goal, call multiple tools, chain together workflows, and adjust its behavior based on intermediate results. The traditional security question of “should this user have access?” is still necessary, but it is no longer sufficient when the “user” is an autonomous system making real-time decisions across an enterprise environment. What matters is not only whether an agent has access, but whether a specific action is appropriate in a specific context, whether it aligns with the intended objective, whether its behavior remains within acceptable boundaries as the workflow unfolds, and whether we can intervene when it begins to drift.

That realization reframed the problem for us. We were not just dealing with identity, infrastructure, application security, or AI governance in isolation. We were dealing with runtime governance of autonomous action. That is a different level of abstraction, and it requires a security model that can reason about what agents are doing while they are doing it.

As we worked through this problem space, one framework kept surfacing as a promising way to organize the discussion: Autonomous Action Runtime Management, or AARM. What we found compelling about AARM is that it starts with the action itself. Rather than looking only at the identity of the actor or the system being accessed, AARM asks how actions should be evaluated and governed at runtime across dimensions such as context, intent, policy, and behavior.

That language maps well to the reality we are seeing. Agentic actions depend heavily on situational context. Intent is often inferred rather than explicitly declared. Policies need to become more adaptive. Behavior must be observed continuously, not simply approved once at the point of access. AARM does not solve every problem, and we should be careful not to overstate its maturity. But it gives us a practical structure for describing the class of problems that agentic systems introduce. It feels less like forcing a new technology into old categories and more like meeting the problem where it actually exists.
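To make those four dimensions a little more concrete, here is a minimal, hypothetical sketch of an evaluation point placed between an agent and its tools. Everything in it, including the class names, the example tool, the thresholds, and the policy rules, is invented for illustration. It is not an AARM specification or a reference implementation, only one way to picture runtime checks on context, intent, policy, and behavior.

```python
# Illustrative only: a hypothetical runtime gate that evaluates a single agent
# action across the four dimensions discussed above. Names, tools, and
# thresholds are invented for this sketch and are not part of AARM itself.
from dataclasses import dataclass, field


@dataclass
class AgentAction:
    agent_id: str
    tool: str                # e.g. "payments.refund" (hypothetical tool name)
    arguments: dict
    declared_goal: str       # the objective the agent was given
    session_actions: list = field(default_factory=list)  # prior actions this session


@dataclass
class Verdict:
    allowed: bool
    reasons: list


class RuntimeActionGate:
    """Hypothetical policy-evaluation point between an agent and its tools."""

    def __init__(self, allowed_tools_by_goal: dict, max_actions_per_session: int = 50):
        self.allowed_tools_by_goal = allowed_tools_by_goal
        self.max_actions_per_session = max_actions_per_session

    def evaluate(self, action: AgentAction) -> Verdict:
        reasons = []

        # Context: is this tool permitted at all for the goal the agent is pursuing?
        permitted = self.allowed_tools_by_goal.get(action.declared_goal, set())
        if action.tool not in permitted:
            reasons.append(f"tool '{action.tool}' not permitted for goal '{action.declared_goal}'")

        # Intent: crude alignment check between the declared goal and the request.
        if "refund" in action.tool and action.declared_goal != "process_refund":
            reasons.append("action does not align with declared objective")

        # Policy: parameter-level constraint (an example threshold, not a real rule).
        if action.arguments.get("amount", 0) > 500:
            reasons.append("amount exceeds autonomous-approval threshold")

        # Behavior: has the workflow drifted into an unusually long action chain?
        if len(action.session_actions) >= self.max_actions_per_session:
            reasons.append("session action count exceeds behavioral baseline")

        return Verdict(allowed=not reasons, reasons=reasons)


if __name__ == "__main__":
    gate = RuntimeActionGate({"process_refund": {"payments.refund", "crm.lookup"}})
    action = AgentAction(
        agent_id="support-agent-7",
        tool="payments.refund",
        arguments={"order_id": "A-1021", "amount": 120},
        declared_goal="process_refund",
    )
    verdict = gate.evaluate(action)
    print(verdict.allowed, verdict.reasons)
```

In practice, the interesting work lies in where those signals come from: continuously observed behavioral baselines, inferred rather than declared intent, and adaptive policy, rather than the static rules shown in this toy example.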

It is also important to be clear that AARM is a work in progress, and that is not a weakness. The agentic runtime security space is moving too quickly for anyone to credibly claim that we already have a finished answer. New architectures, deployment models, tool interfaces, and failure modes are emerging constantly. A static framework created too early would almost certainly miss important lessons from the field.

The value of AARM is that it can develop as a living framework. It can be shaped by practitioners, researchers, enterprises, AI builders, and security teams that are actively learning how these systems behave in production-like environments. This is the right posture for CSA: not to pretend that the problem is solved, but to create a vendor-neutral place where the industry can converge on better concepts, shared language, and practical guidance.

One of the most encouraging aspects of this effort is that AARM did not emerge in isolation. It has been shaped by industry engagement and by people who are directly grappling with agentic security challenges. We are especially pleased that Herman Errico of Vanta will continue to chair this work as AARM becomes part of the Cloud Security Alliance research portfolio. Herman brings a practical, builder-oriented perspective that is essential for a field changing this quickly, and his continued leadership gives the effort both continuity and momentum.

We also want to thank Vanta for supporting AARM’s transition into a CSA-led initiative. That kind of openness matters. When a useful framework is brought into a broader, vendor-neutral community, it gives the entire industry a better chance to learn together. No single company is going to solve agentic security on its own. The problem is too complex, the deployment patterns are too diverse, and the pace of change is too fast.

As part of the CSAI Foundation’s broader mission to secure the agentic control plane, AARM gives us an important foundation for thinking about runtime enforcement and governance. It connects naturally with other CSA and CSAI efforts, including the AI Controls Matrix, STAR for AI, RiskRubric, and the Agentic Trust Framework. But perhaps more importantly, it gives us a concrete place to continue the conversation about how autonomous systems should be governed while they are operating.

We are still early. We are still learning how agents behave in real-world environments. We are still discovering new attack surfaces, new control gaps, and new patterns of misuse and failure. We expect to see a whole host of new types of agents inspired by open source projects and industry innovation. We are also learning how to balance safety and usability, because agents will only deliver value if they are both effective and trustworthy. Over-constraining them may defeat their purpose. Under-governing them may create unacceptable risk.

AARM is one step toward answering what I believe will become one of the defining cybersecurity questions of this era: how do we ensure that autonomous systems act in ways that are safe, aligned, accountable, and observable at scale?

We do not have all the answers yet. But we do believe the industry needs a serious, practical, and collaborative effort focused on the runtime behavior of agents. AARM gives us a promising path forward, and we are excited to help develop it with the broader CSA community.
