
Core Collapse

Published 02/26/2026

Written by Rich Mogull, Chief Analyst, CSA.

 

How AI is blowing cybersecurity apart, taking us back to our beginnings, and reforging our foundations.

A star dies slowly. Then all at once.

A star lives billions of years in tension. Thermal energy from fusion in its core pushes outward against gravity pulling inward. It burns through its elements from hydrogen to helium, helium to carbon, then neon, oxygen, silicon, and finally iron. But iron does not release energy when fused; it requires it. The core hardens with iron while fusion continues in surrounding shells, and the outer layers expand, sometimes engulfing nearby planets. Inside, the engine is failing. Without new energy from the core, pressure support weakens. Gravity takes control. In less than a second the core collapses, a shock wave drives the outer layers into space, and something new is left behind.

It’s called Core Collapse.

In 2012 I wrote Inflection to describe the changes I thought would define security for the next decade: a shift to faster detect/response cycles, increased segregation, and greater security automation.

In 2017 I wrote Tidal Forces to describe the forces pulling apart existing security models: IaaS transforming datacenters, SaaS absorbing our back-office apps, and endpoint security changing underneath us.

In 2026 our current security models are burning through their core as the inexorable force of AI-accelerated threats drives a collapse. But this doesn’t need to be completely destructive; instead, I see transformation through compression as those forces drive us back to fundamental principles.

We don’t know exactly how this push-pull will work out, but we do know enough to see the shape of things. This will happen even if (when) the AI economic bubble bursts. Artificial Intelligence companies and features will come and go, but the technology is here to stay.

 

The Mathematics of the Attackers’ Advantage

AI empowers attackers more than defenders.

It’s a matter of the problem space. Attackers control the bounding of their problem space. Find a vulnerability in a code base. Execute an attack against a set of targets. Within that space AI improves the capabilities of a low-skilled attacker. It improves the scope of a highly-skilled attacker. And it increases the potential resources (via autonomous agents) of any attacker.

Attackers face a search problem. Find an exploitable path to an objective within these constraints. AI is exceptionally good at searching bounded spaces.

Defenders have less control over their bounding space. They need to defend all resources at all times against all attackers. Even when AI hardens code, builds patches, and improves threat detection, the overall problem space is nearly unbounded with compounding complexity.

Defenders face a combinatorial complexity problem. Defend x resources from y attacks over an unlimited timeframe.

An attacker controls the number of attackers, the skills of the attacker, the target/scope of the attack, the speed of an individual attack, and the time to devote to the attack. Using AI agents, an attacker can increase the number of attackers, increase the average attacker capability (skills), and run the attacks quickly, automatically, and continuously against their selected attack surface.

In undifferentiated attacks they can automatically look for a small subset of flaws across… the entire Internet.

In differentiated attacks they can focus resources to explore the resource graph. Shifting from “find all instances of x” to “hammer on y pathways until you reach an objective.”

Defenders? They still need to protect it all. From any potential attack. From everyone. Forever.

Even if any given AI agent equally empowered attackers and defenders, the attackers only need enough resources to run against their selected attack surface. The defenders need to run it across their entire defense surface, which is essentially all resources.
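The asymmetry can be made concrete with a toy calculation. All numbers below are illustrative assumptions, not measurements; the point is the shape of the math, not the specific values.

```python
# Toy model of the attacker/defender asymmetry described above.
# Every number here is a hypothetical assumption for illustration.

def attacker_work(targets: int, flaw_classes: int) -> int:
    """Attacker bounds the space: chosen targets x chosen flaw classes."""
    return targets * flaw_classes

def defender_work(resources: int, attack_classes: int, review_windows: int) -> int:
    """Defender must cover every resource against every attack class,
    continuously (modeled here as discrete review windows)."""
    return resources * attack_classes * review_windows

# A focused attacker: 50 targets, probing 3 flaw classes.
focused = attacker_work(targets=50, flaw_classes=3)  # 150 units of work

# A single defender: 500 resources, 40 attack classes,
# re-checked daily for a year.
defense = defender_work(resources=500, attack_classes=40, review_windows=365)

print(focused, defense, defense // focused)  # 150 7300000 48666
```

Even with generous assumptions for the defender, the workloads differ by orders of magnitude, and the attacker gets to pick where their 150 units land.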

 

The Defender’s Model Collapse

In the past decade we’ve materially improved multiple facets of security. Our incident detection and response are light years more advanced. Vulnerability management is materially better than when I started. Our endpoint tools are more effective, we are slowly improving our network segmentation, and we even have a good handle on cloud security.

The core of these advances is our current security model: find the bad thing, fix the flaws, stop the attacker. Perimeter security is still critical, but we rely on it less since we know it can’t stop all the attacks. Plus, attackers search the graph for a path, such as popping a developer’s laptop to gain access to the cloud instead of targeting the cloud directly.

AI could wipe out many of these advancements as exploit development time collapses below defensive response cycles, attack iteration becomes continuous, and attacker AIs systematically enumerate and traverse reachable attack graphs.

Assume that we have a given undiscovered vulnerability in a piece of common software. Assume that an attacker and the defender discover it simultaneously (using AI or otherwise). With AI the attacker can rapidly develop an exploit, pass it on to an AI agent to run an attack, and that agent can swarm with others to search the graph and find the path using additional chained exploits.

The defender uses their AI to develop a patch. But they need to deploy it to all affected resources. And run through tests and staging to make sure they don’t take down prod. And maybe the patch is in a COTS product and they have to wait for patch distribution from the vendor. So maybe they develop a threat detector (using AI) for that new exploit, or a new firewall rule, but they still need to test and deploy both. Or maybe the vuln is in the firewall.

The attacker’s cycle is much faster because it’s a bounded problem space. They don’t have to worry about technical debt, and they don’t have to keep a public facing prod website running.

Defenders can no longer rely as strongly on patching. Nor IPS or endpoint signatures. They can’t rely on having enough threat detectors and responding to known exploit scripts or human-speed attacks. Response-focused defenses are less effective when facing an ever-escalating game of battle bots where offensive and defensive AIs compete in co-evolutionary cycles on compressed timescales.

How do we defend when every day is day zero?

We do have the answers, but it isn’t where most of us are focused today. If anything, it’s closer to our roots.

 

Unstable Ground for Unreliable Agents

Enterprise AI adds more surface to defend from new attacks, expanding the defender’s problem space.

As we increasingly adopt AI, our efforts will also need to shift to a different flavor of data and application security. Research has shown that AI models may be more vulnerable to grounding attacks than most people realize. Research from Anthropic showed that even large frontier models can be poisoned with only 250 malicious documents. SEO efforts are already shifting to using indirect prompt injection and other data-based techniques to manipulate model results. Attackers are already embedding malicious prompts in websites and downloadable skills to exploit the LLMs.

This intersects with data gravity as enterprises consolidate and centralize data to fuel their AI applications. AI’s endless appetite for data creates increasingly attractive targets not only for theft, but for manipulation. Poisoning an AI data store to manipulate results can be as impactful as owning the model or infrastructure itself. And for highly advanced attackers, subtly altering a foundation model can have profound effects throughout the monoculture.

Sitting astride these gravity wells of data are AI agents that are controlled with language. Human language, which is effectively an unbounded protocol. Every previous technology had a defined protocol you could write rules against, but AIs work with the infinite space of language.

Trick the AI via a grounding attack or prompt injection and you can instruct that agent to do anything it has permission for. Remote code execution expands beyond bounded vulnerability classes into unbounded text input. Gain remote code execution by socially engineering a machine.

Unbounded problems suck. But the good news is that grounding, at least where we control the data, is a bounded, defensible space.

 

AI Below the Security Poverty Line

Wendy Nather coined the term the Security Poverty Line to highlight the differences between the haves and the have-nots: between a school system, regional hospital, or local credit union and a global financial or technology company. Below the poverty line, organizations lack budget, expertise, capabilities, and influence. More organizations live below it than above it.

Think of the challenges for an organization that doesn’t have a patch management team. Or an application security team. Or an endpoint security team. They are often limited to less than a handful of security professionals, or security is someone’s “other duty as assigned.”

Their defense surface is smaller, but their security resources are often exponentially smaller. And the odds are against them affording any of the fancy new AI tools. They are already the primary target for ransomware and cryptomining; now they’re facing script kiddies with AI agents capable of finding 0-days and operating with the skills of at least a mid-level red teamer.

Put a pin in this for now, we’ll be back.

 

Respond with Security Boundaries

Security isn’t collapsing; a big chunk of how we do it today is. We don’t need to wait for some fancy new innovation; we can return to our roots.

Instead of trying to out-detect and out-respond AI-speed attacks, we can warp the math back into our favor by building structurally resilient systems with deterministic boundaries even around non-deterministic systems. There are two facets to this strategy:

  • Every security boundary we add between an attacker and their objective compounds their problem space. Yeah, it’s defense in depth, but at a level we haven’t generally implemented. We need to add boundaries throughout the application stack, increasing the attacker’s combinatorial complexity.
  • We place deterministic boundaries around our non-deterministic AI agents and models. We constrain the capabilities of the agent to align with its purpose, reducing the blast radius of a prompt or grounding attack.

If you’ve been developing to modern architectural patterns, you are likely already on the right path. Microservices, APIs, and PaaS are all amenable to inserting additional security boundaries that increase the structural difficulty for an attacker. We need to add boundaries throughout our infrastructure and leverage segmentation and isolation. We can also improve our detection boundaries with greater use of active techniques like canaries and honey tokens.
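A honey token is the simplest of those active detection boundaries: plant a credential or record that no legitimate workflow ever touches, and treat any use of it as a high-confidence compromise signal. A minimal sketch, with hypothetical names:

```python
# Minimal honey-token detection boundary. The token value and alert
# handling are hypothetical placeholders, not a real product's API.

HONEY_TOKENS = {"AKIA-FAKE-00TRIPWIRE"}  # planted, never used legitimately

def check_access(credential_id: str) -> str:
    """Return 'ALERT' if a planted token is used, 'OK' otherwise."""
    if credential_id in HONEY_TOKENS:
        # Any use at all is evidence someone is enumerating credentials.
        return "ALERT"
    return "OK"

print(check_access("AKIA-FAKE-00TRIPWIRE"))  # ALERT
print(check_access("legit-cred-1234"))       # OK
```

The appeal is that the false-positive rate is near zero by construction, which is precisely what you want when detectors must fire faster than an AI-speed attacker moves.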

For AI-powered applications we can focus on cryptographically-sound identity chaining capable of supporting transitive trust (on-behalf-of). Fancy words, but if you’ve used AWS service roles you’ve already seen this in action. We bound what an agent can do with task-based authorizations with tools like OPA and Cedar-style declarative policies. We add in deterministic guardrails for prompt injection defense and output filtering. We sandbox our agents.
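The task-based authorization pattern can be sketched in a few lines. This is a hand-rolled illustration of the idea, not OPA or Cedar themselves, and every task, tool, and resource name below is hypothetical; in production, this role belongs to a real policy engine.

```python
# Sketch of a deterministic, task-scoped authorization boundary around a
# non-deterministic agent. All names are hypothetical; in practice use a
# policy engine like OPA or Cedar rather than hand-rolled checks.

# Each task declares an explicit allowlist of (tool, resource-prefix) pairs.
TASK_POLICIES = {
    "summarize-tickets": {
        ("read_ticket", "tickets/"),
        ("post_summary", "reports/"),
    },
}

def authorize(task: str, tool: str, resource: str) -> bool:
    """Deny by default; permit only tool/resource pairs the task declares."""
    for allowed_tool, prefix in TASK_POLICIES.get(task, set()):
        if tool == allowed_tool and resource.startswith(prefix):
            return True
    return False

# In scope for the task: allowed.
print(authorize("summarize-tickets", "read_ticket", "tickets/1042"))  # True

# A prompt-injected request to read billing data: refused, no matter
# what the model "decided" to do.
print(authorize("summarize-tickets", "read_file", "billing/keys"))    # False
```

The key property is that the boundary is deterministic: whatever text reaches the model, the blast radius is capped by the task's declared policy, not by the model's judgment.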

I don’t want to over-simplify things. Aside from the complexity of implementing new boundaries into existing environments, there are some consolidated choke points that are harder to defend, and already commonly used in attacks, such as our identity providers.

Zero trust or zero days, your call.

 

Gravitational Collapse to Centralization

Above I listed some architectural responses, but for many organizations, especially those below the security poverty line, the answer will be an organizational response.

The math I’ve outlined becomes an intractable problem for organizations that can’t afford dedicated security teams. The asymmetry of the AI benefits for attackers and defenders is impossible to overcome for most small and mid-sized organizations… if they have to do it themselves.

Under-resourced organizations can choose between being repeatedly breached and outsourcing their security to someone better-resourced. And they won’t really be able to just outsource the security function; they’ll need to outsource their applications and hosting to companies that can defend at scale.

This will drive massive consolidation onto hosted platforms and (capable, AI-powered) MSSPs. This is the gravitational collapse: the industry shedding its outer layers as organizations that tried to do everything themselves consolidate onto denser, more capable centralized cores. Today's MSSPs will transform from human-powered operations with some automation to AI-powered security operations with human oversight, a fundamentally different business and cost model.

While this is the right decision for an under-resourced organization, it starts to create consolidated systemic risk. I don’t see an alternative, and we’ll need to rely on economic forces like cyberinsurance and market dynamics since I don’t see the regulatory environment changing anytime soon.

 

Collapse is Inevitable; Demise is Not

The core, as I write this, is burning the last of its silicon and preparing for gravitational collapse. I’ve seen way too many examples of AI-driven vulnerability discovery, exploit development, and automated attack graphs to think things won’t change quickly, no matter how soon enterprises adopt AI themselves.

We’re figuring out our potential AI business cases, while our adversaries are already refining their AI-powered business operations.

But we aren’t flying blind. We can see the shape of things. We don’t need to dominate the AI arms race, we need to execute on what we already know: architectural resilience, defense in depth, Zero Trust, and generally increased segmentation and isolation. We expand and complicate the attacker’s problem space using security fundamentals instead of trying to outrun them.

After the core of the star collapses, what remains is either a black hole or a neutron star, the densest, most resilient matter in the universe. It’s stronger, not weaker, and might just last until the heat death of the universe.
