Security for AI Building, Not Security for AI Buildings
Published 12/09/2025
AWS re:Invent 2025 Shows What "Shift Left" Can Mean for AI Security
Although I wasn’t at AWS re:Invent in person this year (only the second one I’ve missed since 2013), I sat at home closely following the early “pre:Invent” and official conference announcements. While it’s always risky to extrapolate generalized industry trends from one company’s product announcements, I think we are seeing the earliest signs of a “new” angle on AI security. This isn’t exclusive to Amazon, and I’m not even sure they are thinking about this shift the way I am.
Most AI security today is security for buildings—protecting finished structures with guardrails, content filters, runtime monitoring. Alarms, locks, security guards. There's nothing wrong with this; we'll always need it. But what AWS announced are tools for security during building—decisions made during construction, not after occupancy.
There were a bunch of security feature and product announcements, but three of them clustered together as “building,” not “buildings,” capabilities:
- IAM Policy Autopilot is an open source MCP server that builders can use to analyze their code and auto-create functional IAM policies. It’s a deterministic tool, not an AI agent itself, but as an MCP server it’s designed to pair with agentic coding tools. As someone who is pretty guilty of giving myself admin access to start and then scoping things down later, having a tool I can run against my code base to create better policies from the start is pretty interesting; see the first sketch after this list. (AWS warns it leans toward functionality over minimum permissions, but that still beats “*.* until I get caught” access.)
- AgentCore Policy with Evaluations adds Cedar-based policy enforcement to AgentCore. Don’t know what that means? You can start with default-deny policies for your agents and then open up capabilities (like which tools they can access); see the Cedar sketch after this list. It’s a way of enforcing boundaries around agent actions, built into the same infrastructure fabric that runs the agents. Evaluations adds a set of built-in analyses, like correctness and safety checks.
- Nova Forge and Model Checkpoints allow enterprises to blend their proprietary data into different training checkpoints, combined with the training data for Nova (Amazon’s model family). This is the most interesting of the announcements, and the one that inspired this post, because it potentially allows an enterprise to train a foundation model (at lower cost) that provides the expertise they want without retaining general world knowledge they don’t need, like “how to create a biological weapon.” Neither Microsoft nor Google offers checkpoint-level access; they only expose post-trained models for fine-tuning. Nova Forge lets enterprises inject their data earlier in the training process, which is a fundamentally different kind of control.
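To make the IAM Policy Autopilot idea concrete, here’s a sketch of the kind of scoped policy a code-analysis tool like this might emit for a function that only reads one DynamoDB table and writes to one S3 prefix. The resource names, account ID, and Sids are made up for illustration, and this isn’t the tool’s actual output format:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "HypotheticalReadOrdersTable",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/OrdersTable"
    },
    {
      "Sid": "HypotheticalWriteReportsPrefix",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::example-reports-bucket/reports/*"
    }
  ]
}
```

Even if a generated policy like this leans toward functionality rather than true least privilege, the blast radius of a compromised workload is two resources, not the whole account.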
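And here’s what “default deny, then open up capabilities” can look like in Cedar, the policy language behind AgentCore Policy. Cedar denies anything no policy explicitly permits, and an explicit forbid always beats a permit. The entity and action names below are my own illustration, not the actual AgentCore Policy schema:

```cedar
// Cedar is default-deny: with no matching permit, every request fails.
// Agent, Action::"InvokeTool", and Tool are hypothetical entity types,
// not the real AgentCore schema.

// Let one agent call one specific tool.
permit (
  principal == Agent::"support-agent",
  action == Action::"InvokeTool",
  resource == Tool::"kb_search"
);

// An explicit forbid overrides any permit: block a risky tool outright.
forbid (
  principal,
  action == Action::"InvokeTool",
  resource == Tool::"shell_exec"
);
```

The point isn’t the syntax; it’s that the boundary lives in the runtime fabric, so it holds even if a prompt injection convinces the agent to try something it shouldn’t.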
Nova Forge, plus some other non-security tools, got me thinking about how AWS is starting to focus more on the enterprise tooling to build AI applications and less on creating the latest and greatest models (because they missed that boat by a few miles). AWS isn't alone here—there are third-party tools emerging for all three layers—but Nova Forge is currently unique.
There are, potentially, multiple ways to achieve any desired AI security outcome. Many of the things I’ve looked at put security boxes around the AI boxes, such as a prompt-filtering guardrail. There’s nothing wrong with this approach, and it will always be a part of our AI security strategy, just as we still have firewalls of some sort on every network.
I see three angles emerging to shift more AI security left:
- Model architecture (Nova Forge): Enterprises can choose what knowledge their models contain, potentially reducing attack surface for specific use cases.
- Agent and application architecture (AgentCore Policy): Security controls are enforced at the infrastructure layer “in the fabric”, not just at the prompt layer.
- Development workflow (IAM Policy Autopilot): Security validation is embedded in how code gets written and deployed.
Again, I want to emphasize that I’m largely limiting this post to examples from the recent AWS announcements. Of those, Nova Forge is (for now) the only unique capability, unless you count training models from scratch, and there are absolutely multiple other first- and third-party tools for the agent/application architecture and development workflow (especially the development workflow).
As security professionals we tend to be stronger at security for buildings for a lot of very legitimate reasons. We aren’t usually the experts on the technologies we defend, we often rely on tools for multiple purposes (hello the-things-we-call-firewalls-that-aren’t-just-firewalls-at-ALL), and we are frequently brought in late in the game, if at all.
But just as we brought the Sec into DevSecOps, we have an opportunity to adapt our skills and bring tremendous value to AI security. Security for building requires architects, not just security guards. The skill set for deciding what goes into a Nova Forge checkpoint blend is different from the skill set for configuring a guardrail. Most security teams aren't staffed for these decisions yet, and that might be the real shift we need to make. The good news is we're still early. Most enterprises are experimenting with AI, not deploying at scale. The window to build these skills, and to build these security decisions into the foundation, is still open.