We Are Fixing the Wrong Problem in Non-Human Identity Security
Published 04/23/2026
Introduction: The Identity Everyone Ignores
For over two decades, identity security has revolved around a simple assumption: "people are the risk." We built programs to govern users, authenticate humans, de-provision employees, and enforce access reviews. That model worked until it did not.
In multiple enterprise environments I have worked with over the past few years, the shift did not happen gradually. It happened invisibly: machine identities quietly became the dominant trust layer without corresponding governance. Today, the most powerful identities in the enterprise are not human. They do not log in. They do not forget passwords. They do not fall for phishing emails. And yet they now sit at the center of the most damaging breaches of our time. These identities (service accounts, API keys, OAuth tokens, workload roles, CI/CD credentials, bots, and automated agents) are collectively referred to as Non-Human Identities (NHIs). This shift toward machine-driven trust has also been explored in prior work on decentralized identity and trust frameworks in industrial environments [1]. NHIs themselves are not the issue. What continues to fail us is how persistently we try to govern them using user-centric models that were never designed for autonomous systems.
The OWASP Wake-up Call And Its Limits
The release of the OWASP Top 10 Non-Human Identities Risks (2025) was an important moment for the security industry. For the first time, a mainstream framework acknowledged what many practitioners had already learned the hard way: machine identities are the primary attack surface in modern environments. [2][3]
OWASP highlights issues that every security leader has encountered:
- Orphaned service accounts
- Leaked secrets in repositories and pipelines
- Over-privileged workloads
- Unsafe CI/CD integrations
- Weak or deprecated authentication methods
These risks are real, recurring, and repeatedly exploited in the wild. But here is the uncomfortable truth: OWASP correctly identifies the problems, many of which practitioners have struggled with for years, yet it stops short of challenging the underlying mental model that created them in the first place.
The Core Blind Spot: NHIs Are Not “Accounts”
Most NHI discussions treat machine identities as a technical variation of user accounts, something to inventory, rotate, and review periodically.
In practice, NHIs are not accounts. They are autonomous trust executors. This distinction may sound semantic, but it changes how breaches unfold and, more importantly, why they are so difficult to contain once they begin. A non-human identity does not simply "have access." It executes authority continuously, often without a clear owner, across systems that no single team understands end-to-end.
This is why breaches involving NHIs behave differently:
- A single compromised token can cascade across thousands of downstream systems
- CI/CD credentials, once poisoned, can quietly propagate across supply chains
- OAuth tokens can cross organizational boundaries in ways that are often invisible until it is too late
We have seen this repeatedly:
- The SolarWinds compromise abused service principals and OAuth roles to achieve persistent, stealthy access across victim environments. [4]
- The 2026 Trivy and GitHub Actions supply-chain attacks weaponized CI identities to steal cloud credentials at scale and infect dependent pipelines. [5][6]
None of these failures were caused by weak passwords or inattentive users. They were the direct result of unbounded machine trust.
Discovery Is Not an Inventory Problem
Most NHI programs begin with a deceptively simple goal: "Let's discover all non-human identities." In practice, that goal is already obsolete, and many teams realize this only after investing heavily in inventory-first approaches.
In modern cloud-native and AI-enabled systems:
- Identities are spawned dynamically
- Credentials exist for seconds or minutes
- Trust is delegated at runtime, not provisioned statically
Ephemeral Kubernetes service accounts, short-lived CI tokens, GitHub runner identities, and AI agent credentials do not sit still long enough to be inventoried. The Trivy GitHub Actions compromise in 2026 illustrates this perfectly. Attackers did not exploit a dormant account. They stole runtime credentials from CI runners, reused them within minutes, and fanned out across ecosystems before defenders could react.
Effective NHI discovery must therefore shift from "What identities exist?" to "Where is authority being executed right now?" That is a fundamentally different problem, and it demands telemetry at the process, workload, and pipeline level, not just IAM databases.
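The activity-centric view described above can be sketched as a small query over recent audit events. This is a minimal illustration, not a product feature: the event tuples, principal names, and 15-minute window are all assumptions; in practice the events would come from sources such as cloud audit trails, Kubernetes audit logs, or CI runner telemetry.

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

# Hypothetical audit-log events: (timestamp, principal, action).
# Real pipelines would stream these from CloudTrail, Kubernetes
# audit logs, or CI runner telemetry, not an in-memory list.
EVENTS = [
    ("2026-04-23T10:00:05Z", "sa-ci-runner-42", "sts:AssumeRole"),
    ("2026-04-23T10:00:07Z", "sa-ci-runner-42", "s3:PutObject"),
    ("2026-04-23T09:10:00Z", "svc-billing-export", "s3:GetObject"),
]

def active_principals(events, now, window_minutes=15):
    """Return principals that executed authority inside the window,
    with the actions they performed: an activity view, not an inventory."""
    cutoff = now - timedelta(minutes=window_minutes)
    activity = defaultdict(set)
    for ts, principal, action in events:
        when = datetime.fromisoformat(ts.replace("Z", "+00:00"))
        if when >= cutoff:
            activity[principal].add(action)
    return dict(activity)

now = datetime(2026, 4, 23, 10, 5, tzinfo=timezone.utc)
print(active_principals(EVENTS, now))
```

Note that the ephemeral CI runner shows up because it acted recently, while the dormant service account does not, which is exactly the inversion of an inventory-first view.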
Ownership: The Question No Framework Answers
Ask any security team a simple question: “Who owns this service account?” In most organizations, the honest answer is “no one.” I have asked this question in architecture reviews and incident discussions, and the silence that follows is often more telling than any audit finding.
The Internet Archive breach, where long-lived Zendesk credentials sat unused for nearly two years before exploitation, was not a tooling failure; it was an ownership failure. For NHIs, technical ownership is meaningless without economic accountability. Every non-human identity must be able to answer four questions at any moment:
- Why does it exist right now?
- Who actually benefits from its operation?
- If it fails or is abused, who takes the hit?
- And critically, who can shut it down immediately?
If those questions cannot be answered in real time, governance is theoretical.
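One way to make those four questions enforceable rather than theoretical is to require them as metadata on every identity and flag the gaps automatically. The sketch below is illustrative only: the field names (`purpose`, `business_owner`, `risk_owner`, `kill_switch`) are assumptions, not any standard schema.

```python
from dataclasses import dataclass, field

# One metadata field per ownership question; names are illustrative.
REQUIRED = ("purpose", "business_owner", "risk_owner", "kill_switch")

@dataclass
class NonHumanIdentity:
    name: str
    metadata: dict = field(default_factory=dict)

def ownership_gaps(identity: NonHumanIdentity) -> list:
    """Return the ownership questions this identity cannot answer.
    An empty list means governance is at least answerable in real time."""
    return [key for key in REQUIRED if not identity.metadata.get(key)]

svc = NonHumanIdentity(
    "svc-billing-export",
    {"purpose": "nightly invoice export"},
)
print(ownership_gaps(svc))
```

A check like this can run at identity creation time, turning "who owns this?" from an incident-room question into a provisioning gate.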
Least Privilege Is No Longer Enough
“Least privilege” is still a cornerstone of identity security, but on its own, it collapses under autonomy.
In controlled environments, it works well. In dynamic, interconnected systems, it often degrades faster than teams can reassess it. Machine identities do not remain static:
- Privileges accumulate through integrations
- Trust expands through transitive relationships
- Behavior evolves faster than access reviews
In the SolarWinds case, service principals were not dangerously privileged on day one. They became dangerous after trust relationships compounded over time. [4]
What NHIs require is not static least privilege, but:
- Purpose-bound access (what outcome is allowed, not which API)
- Time-boxed trust (expiration by default)
- Behavior-driven revocation (kill access when usage deviates)
Until governance shifts from entitlement lists to trust behavior, attackers will continue to exploit the gap.
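The three properties above (purpose-bound, time-boxed, behavior-driven) can be combined in a single grant object. The sketch below is a toy model under assumed names (`ScopedGrant`, outcome strings, a 300-second TTL), not a real authorization system; the key design choice it shows is that a deviation revokes the whole grant rather than merely denying one call.

```python
import time

class ScopedGrant:
    """Time-boxed, purpose-bound grant with behavior-driven revocation.
    Outcome names and TTL values are illustrative assumptions."""

    def __init__(self, principal, allowed_outcomes, ttl_seconds):
        self.principal = principal
        self.allowed = set(allowed_outcomes)      # purpose-bound
        self.expires_at = time.time() + ttl_seconds  # time-boxed
        self.revoked = False

    def authorize(self, outcome: str) -> bool:
        if self.revoked or time.time() >= self.expires_at:
            return False
        if outcome not in self.allowed:
            # Behavior deviated from the declared purpose:
            # kill the grant entirely, do not just deny this call.
            self.revoked = True
            return False
        return True

grant = ScopedGrant("ci-runner-42", {"publish-artifact"}, ttl_seconds=300)
print(grant.authorize("publish-artifact"))  # allowed: matches purpose
print(grant.authorize("read-secrets"))      # denied, and grant revoked
print(grant.authorize("publish-artifact"))  # denied: already revoked
```

Contrast this with a static entitlement list, where the anomalous `read-secrets` call would be denied but the compromised credential would keep working for its legitimate scope indefinitely.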
Conclusion: A Different Way Forward
The industry does not need another checklist. It needs a new foundation:
- Stop treating NHIs as “users without passwords”
- Govern trust execution, not just access grants
- Make identity creation itself a high-risk event
- Attach economic accountability to machine authority
- Design for revocation, not permanence
Non-human identities will continue to grow, especially as AI agents, autonomous workflows, and self-healing systems become mainstream. The organizations that succeed will not be the ones with the cleanest inventories. They will be the ones who understand who their machines trust, why, and for how long. That is where identity security is heading, whether we are ready for it or not.
References
1. Banerjee, T., & Singh, H. "Securing Non-Human Identities in Industrial IoT: A Blockchain-Based Trust Framework." NIPES Journal of Science and Technology Research, vol. 7, no. 4, 2025, pp. 228–246. https://doi.org/10.37933/nipes/7.4.2025.1660
2. OWASP Non-Human Identities Top 10 (2025). [owasp.org]
3. CSO Online: Understanding OWASP's Top 10 NHI Risks. [csoonline.com]
4. MITRE ATT&CK: SolarWinds Compromise (C0024). [attack.mitre.org]
5. SANS Institute: TeamPCP Supply Chain Campaign (2026). [sans.org]
6. The Hacker News: Trivy GitHub Actions Breach (2026). [winbuzzer.com]
About the Author
Tuhin Banerjee is a Senior Practice Director in Identity and AI Security, advising global enterprises on governing digital identity, mitigating AI-driven risk, and securing autonomous systems at scale. With two decades of leadership experience, he helps organizations modernize security programs and build resilient AI governance frameworks. He holds CRISC, CCSP, CISM, CEH, and Generative AI certifications, and is a Fellow of NIPES and Senior Member of IEEE, Sigma Xi, and IETE. Tuhin Banerjee can be reached online at [email protected].
