Who’s Behind That Action? The AI Agent Identity Crisis

Published 04/20/2026


In collaboration with Aembit, CSA has released a new survey report about identity and access for AI agents. The report shows that AI agents are already operating across internal applications, APIs, SaaS platforms, cloud infrastructure, data platforms, and development pipelines. In other words, they are appearing in exactly the places where access decisions matter most.

That growth creates an uncomfortable question: When an AI agent operates in an enterprise environment, who is it, exactly?

The report’s answer is not reassuring.

 

The Identity Gray Area: Where AI Agents Live Today

CSA found that most AI agents do not operate as distinct identities. Instead, they often exist in what the report calls an “identity gray area.” Organizations don't fully treat the agents as human users, but they don't manage them as first-class machine identities either.

Some organizations use application or workload identities. Others rely on shared or generic service accounts. Some even let agents operate under a human user’s identity.

The result is a patchwork of identity treatments applied to AI agents. When identity is unclear, access control becomes inconsistent, and inconsistency is where risk begins to accumulate.

 

The Hidden Risk of Inherited Permissions

When AI agents borrow identities, they also borrow permissions. Once that happens, the principle of least privilege breaks down.

When AI agents operate under human identities or shared service accounts, they inherit the attached permissions, whether or not those permissions align with the agent’s intended role. Over time, that can expand access scope and introduce policy drift.
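To make the inheritance problem concrete, the sketch below compares what a hypothetical agent’s task actually requires against the permissions carried by a borrowed shared service account. The agent name and permission strings are invented for illustration; the point is simply that the borrowed identity’s scope, not the task, determines what the agent can do.

```python
# Hypothetical permission sets, invented for illustration: an AI agent
# operating under a shared service account inherits everything that
# account has accumulated, not just what its current task needs.
AGENT_TASK_NEEDS = {"tickets:read", "tickets:comment"}

SHARED_SERVICE_ACCOUNT = {
    "tickets:read", "tickets:comment",
    "tickets:delete", "billing:read", "users:admin",  # scope accrued over years
}

def excess_permissions(granted: set[str], needed: set[str]) -> set[str]:
    """Return the permissions the identity carries beyond what the task requires."""
    return granted - needed

if __name__ == "__main__":
    excess = excess_permissions(SHARED_SERVICE_ACCOUNT, AGENT_TASK_NEEDS)
    print(f"Least-privilege gap: {len(excess)} excess permissions")
    print(sorted(excess))
```

Audits of this shape, comparing granted scope against task-level need, are one way to surface the policy drift the report describes before it becomes an incident.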

This is where AI agent identity becomes an attack surface issue. Nearly three-quarters of the survey respondents agree that AI agents often receive more access than necessary. Even more striking, 79% agree that AI agents introduce new access pathways that are difficult to monitor. Additionally, 81% agree that prompt manipulation could cause an AI agent to reveal sensitive credentials or tokens.

That is a sharp warning for security teams. The risk is not only that an attacker compromises an agent; the enterprise may also grant the agent too much access in the first place, then struggle to see what the agent is doing with that access.

 

Why Least Privilege Breaks Down in Practice

This keeps happening because of how organizations actually determine access. Only 18% of respondents say they base an AI agent’s access on the agent’s own permissions. More commonly, they tie access to predefined rules or automation logic, the permissions of the human requesting the action, or a shared account.

CSA’s conclusion is that access decisions are frequently anchored in human context or pre-existing automation logic rather than narrowly defined, agent-specific permissions.

That sounds familiar because these are the same IAM challenges organizations have wrestled with for years. The difference now is speed and autonomy: AI agents don’t just hold permissions; they actively use them, chain actions together, and operate at machine scale.

 

When Scale Meets Speed: Expanding the Attack Surface

As AI agents gain more standardized ways to invoke external tools and services, the consequences of inherited permissions grow significantly.

The report notes that architectural patterns allowing agents to interact directly with systems can amplify visibility and privilege issues. This is especially true in environments where identity governance is uneven or decentralized.

At the same time, visibility is lagging behind capability. CSA found that 68% of organizations are not able to clearly distinguish between actions performed by AI agents versus humans.

That creates a dangerous combination: expanding access pathways, limited visibility, and blurred accountability. In security, ambiguity rarely stays harmless for long.

 

The Limits of Reactive Controls

Many organizations recognize the security risks, but their current controls often operate one step too late.

Policy restrictions, human approvals, monitoring agent activity, revoking tokens, or even terminating the compute environment can all help contain damage. However, these approaches are blunt and reactive, stopping activity without necessarily refining the underlying entitlements.

In practice, that means organizations are relying on emergency brakes instead of steering mechanisms.

 

What Identity-Centric Controls Should Look Like

CSA’s findings point toward a more sustainable path grounded in identity-centric controls.

The top capabilities organizations want include:

  • Real-time visibility into agent actions
  • Clear identity separation between AI agents and humans
  • The ability to grant per-task, short-lived access

Taken together, these priorities outline a clear model:

  • AI agents should have their own identities, not borrowed ones.
  • Their access should follow least privilege, aligned to the specific task at hand.
  • Their actions should be fully visible and attributable, so security teams can understand what happened and why.

In summary: apply core IAM discipline to a new class of actor.

 

From Borrowed Identity to Secure Autonomy

That may be the most useful insight in the report.

AI agents are new, but the principles for controlling them are decidedly not. The problem is that many organizations have not yet extended those principles consistently. Instead, they are adapting older identity controls that were never designed for autonomous, tool-using systems.

The result is a mismatch between what agents can do and how organizations govern them.

If your organization is moving quickly with agentic AI, this is the moment to ask a few hard questions.

 

Key Questions for Security Teams

  • Do you have insight into which actions come from humans versus AI agents?
  • Does each AI agent have a distinct identity?
  • Are permissions scoped to the agent’s role, or inherited from something broader?
  • Can you revoke access in a precise way, or only with a blunt instrument?
  • Are your controls embedded at the identity layer, or layered on afterward?

CSA’s research makes clear that AI agents are already operating across core enterprise systems, and their strategic importance is only growing.

The organizations that get ahead of this will not be the ones with the flashiest AI deployments. They will be the ones that stop treating AI agents like extensions of users or service accounts and start treating them as what they increasingly are: operational actors that require clearly defined identity, tightly scoped access, and continuous visibility.

 

Read the Full Report

This blog only scratches the surface. For a deeper dive into AI agent identity, governance gaps, and practical control strategies, read the full CSA report.
