
Identity and Authorization: The Operating System for AI Security

Published 04/29/2026

Written by Kundan Kolhe.

This is the seventh and final blog in a seven-part series on identity security as AI security.

A Replit coding agent erased 1,206 customer records in seconds. In the Salesloft Drift breach, OAuth tokens sat active for months after workflows ended, compromising 700+ organizations. A breach crossed four trust domains before anyone noticed. In Unit 42’s Agent Session Smuggling disclosure, a sub-agent embedded a silent stock trade inside a routine response. Chinese state actors weaponized Claude Code for the first documented large-scale autonomous cyberattack, targeting chemical manufacturing among other sectors. Four CVSS 9.3+ vulnerabilities hit Anthropic MCP, Microsoft Copilot, ServiceNow, and Salesforce, all exposing the same gap: agents that retrieve data under one user’s permissions and broadcast it to audiences that should never see it.

Six different failures. One root cause. Identity and authorization systems that still treat agents like users.

Among the 3,235 enterprise leaders surveyed in Deloitte’s 2026 State of AI report, 73% cite data privacy and security as their top AI risk, yet only 21% have a mature governance model for autonomous agents. Shadow AI adds $670,000 to average breach costs.

The answer is not another security layer bolted on after deployment. It is identity and authorization rebuilt as the operating system for autonomous trust.


The Pattern Hiding in Plain Sight

Read individually, each failure in this series looks like a specific gap. An expired token here, a missing scope check there. Read together, a different picture emerges.

A Replit agent deleted 1,206 records in seconds. At 5,000 operations per minute, per-action consent collapsed into consent fatigue.

The Salesloft Drift breach exploited OAuth tokens that sat active months past business justification. With non-human identities outnumbering humans 144 to 1, durable delegated authority without lifecycle controls is the default vulnerability.

Then it got worse. A breach crossed four trust domains because no interoperable trust fabric could verify and revoke access across all of them in real time.

Delegation chains became attack channels. Unit 42’s Agent Session Smuggling, Rehberger’s Cross-Agent Privilege Escalation, and EchoLeak (CVE-2025-32711, CVSS 9.3) all exploited the same gap: no scope attenuation across multi-hop delegation chains. 

Authorization became a safety case. Chinese state actors weaponized Claude Code for what Anthropic described as the first documented large-scale autonomous cyberattack. A poisoned calendar invite hijacked Gemini to control smart home actuators. JLR’s credential compromise shut down factories for five weeks at £1.9 billion.

Finally, four CVSS 9.3+ vulnerabilities hit Anthropic MCP, Microsoft Copilot, ServiceNow, and Salesforce. Same pattern: agents acting on behalf of multiple users with no method for enforcing permission intersections across the output channel.

Strip away the details and the pattern is obvious. Every agent security problem is an identity and authorization problem wearing a different mask. The OpenID Foundation’s Identity Management for Agentic AI whitepaper established the core use cases for agent authorization challenges. This series mapped those challenges to real-world breaches and production security architecture.


Why Six Tools Cannot Do the Job of One Layer

Most organizations are approaching agent security the way they approached cloud a decade ago. Token vault here, API gateway there, governance dashboard somewhere else. Each tool solves its own problem. None solves the system.

The gap between deployment and governance is stark. Per McKinsey, 91% of organizations already use AI agents, yet only 10% have a strategy for managing non-human identities, and 80% have already encountered risky agent behaviors. The gap is not awareness. It is architecture.

Now picture the compound failure. Agent Session Smuggling (Blog 4) crosses trust domains (Blog 3) in a long-running workflow with drifting credentials (Blog 2) serving a shared channel with mixed permissions (Blog 6) at 5,000 operations per minute (Blog 1) while controlling physical infrastructure (Blog 5). Six tools that share no context cannot secure interconnected failures. No token vault sees the delegation lineage. No API gateway knows the original human intent. No governance dashboard tracks scope expansion across hops.

The real risk is not shadow AI. It is sanctioned AI with no identity. Shadow agents at least trigger alarms. The agents you officially deployed, connected to production systems, running on shared credentials with no lifecycle management? Those will breach you.


What Happens When Identity and Authorization Become the Substrate

The solution is not more tools. It is one layer that all tools share. Agents need to stop being treated like users and start being treated as first-class principals in IAM infrastructure. Not as a best practice. As an architectural requirement.

When every agent action flows through identity and authorization, five properties take hold:

  1. Provenance. Every action traces to an accountable human: through which delegation chain, under what policy, with what scope. This is what broke in Blogs 1 and 4.
  2. Attenuation. When a primary agent delegates to a sub-agent, scope decreases, never increases. Agent Session Smuggling and EchoLeak both exploited the absence of this constraint.
  3. Continuous evaluation. Context shifts, risk changes, intent expires. The Drift breach persisted because authorization was checked once at token issuance and never again.
  4. Lifecycle governance. Employee leaves, their agents get revoked. Workflow completes, credentials expire. The JLR breach turned from an intrusion into a five-week shutdown because none of this happened.
  5. Interoperability. Every solution maps to open standards: OAuth 2.1, OIDC, RFC 8693, CIBA, SCIM, Shared Signals Framework, ID-JAG. The alternative is vendor lock-in at the identity layer, the one lock-in you cannot afford.
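The attenuation rule in point 2 reduces to a simple invariant: a sub-agent's requested scopes must be a subset of its parent's. A minimal sketch, with function and scope names that are purely illustrative (not any vendor's or standard's API):

```python
# Minimal sketch of scope attenuation across one delegation hop.
# All names here are hypothetical, for illustration only.

def attenuate(parent_scopes: set[str], requested: set[str]) -> set[str]:
    """Grant a sub-agent only scopes its parent already holds.

    Scope may shrink across a delegation hop, never grow.
    """
    escalation = requested - parent_scopes
    if escalation:
        raise PermissionError(
            f"sub-agent requested scopes beyond parent: {sorted(escalation)}"
        )
    return requested & parent_scopes  # equals `requested` once the check passes

# A primary agent holding read-only scopes delegates a narrower task:
parent = {"crm:read", "calendar:read"}
child = attenuate(parent, {"crm:read"})
print(child)  # {'crm:read'}
```

Applied at every hop, this check is what Agent Session Smuggling and EchoLeak were missing: a sub-agent asking for a scope its delegating agent never held fails loudly instead of silently escalating.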


Three Questions Before Your Next Audit

  1. Can you trace every agent action to its authorizing human? Not which agent. Who, under what policy, with what scope.
  2. Do credentials expire when the work does? The Drift breach started with tokens that should have been revoked months earlier.
  3. Do multi-user agents enforce permission intersection? If your agents operate under the broadest individual scope instead of the narrowest overlap, data exposure is a when, not an if.
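Question 3's "narrowest overlap" is the set intersection of every audience member's permissions. A hedged sketch, with hypothetical permission names, of what that enforcement looks like for an agent writing to a shared channel:

```python
# Illustrative sketch: an agent posting to a shared channel should act
# under the intersection of every audience member's permissions, not
# under the broadest individual member's scope. Names are hypothetical.

def effective_permissions(audience_perms: list[set[str]]) -> set[str]:
    """Return the narrowest overlap across all users who see the output."""
    if not audience_perms:
        return set()  # no audience, no permissions
    result = set(audience_perms[0])
    for perms in audience_perms[1:]:
        result &= perms
    return result

alice = {"hr:read", "finance:read"}   # can see HR and finance data
bob = {"finance:read"}                # finance only
print(effective_permissions([alice, bob]))  # {'finance:read'}
```

Under this rule, an agent answering in a channel Alice and Bob both read can surface finance data but never HR data, regardless of whose question triggered the retrieval.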


The Cost of Waiting

The regulatory walls are closing in from three directions, and the penalties are not hypothetical.

The EU AI Act’s high-risk system requirements take full effect in August 2026. Article 14 demands demonstrable human oversight of autonomous systems. Article 99 sets fines of up to 35 million euros or 7% of global annual turnover. Without verifiable delegation chains and audit trails, compliance is structurally impossible.

In the US, the SEC’s Cyber and Emerging Technologies Unit (CETU), launched February 2025, explicitly targets fraudulent cybersecurity disclosure and AI-enabled fraud. The cybersecurity disclosure rule requires reporting material cyber incidents within four business days. An agent-driven breach you cannot explain because no delegation lineage exists is not just a security failure. It is a disclosure problem.

And GDPR has generated 7.1 billion euros in cumulative fines since 2018, with enforcement now extending to AI data processing. When agents retrieve and surface personal data across shared workspaces (Blog 6’s exact failure mode), every uncontrolled exposure is a potential violation.

Anthropic’s 2026 Agentic Coding Trends Report identifies “embedding security architecture from the earliest stages” as a top priority. That is the industry signaling what regulators will soon require.

Picture six months of inaction. Orphaned credentials accumulate. Delegation chains grow undocumented. Scope creep compounds across sub-agents. Then the audit comes, or the breach, and you are doing forensic archaeology through decisions no one recorded. Remediation dwarfs prevention. Regulatory penalties dwarf both.

Agent security is identity and authorization security. There is no other kind.


About the Author

Kundan Kolhe is building Okta's agentic AI security business from zero to one, a $100M+ opportunity. He leads product strategy and solutioning for Okta's AI Agents business, a CEO-level initiative defining how enterprises secure AI agents at scale before the market has fully settled. He leads a team of architects and partners with Fortune 500 security and IT executives across financial services, manufacturing, and technology, and shapes the agentic security product roadmap across Okta and Auth0. He brings two decades of experience across product, pre-sales, and marketing at startups and global enterprises across North America, the UK, Australia, and Asia.
