
A Look at the New AI Control Frameworks from NIST and CSA

Published 09/03/2025

Written by Ken Huang, CEO & Chief AI Officer, DistributedApps.ai.

1: Introduction

It’s hard to keep up. One minute you’re reading about a mind-bending new AI model, and the next, your team is asking how to deploy it securely. The incredible power of AI—from crafting emails to powering complex analytics—is undeniable, but so are the risks. We're venturing into new territory, and the old security playbooks don't always apply. This leaves a critical question on the table for every IT and security leader: How do we embrace this revolution without getting burned? Fortunately, we don't have to invent the answers from scratch. Two of the security world's heaviest hitters, the National Institute of Standards and Technology (NIST) and the Cloud Security Alliance (CSA), have just stepped into the ring. They've each released frameworks designed to bring order to the chaos and provide a clear path forward for securing AI.

NIST and the CSA have both recently introduced major frameworks designed to help organizations manage AI-specific security challenges. While they share a common goal, their approaches are distinct and complementary. Let's explore these two crucial initiatives, the NIST Control Overlays for Securing AI Systems (COSAIS) and the CSA AI Controls Matrix (AICM), and understand their unique roles in the AI security landscape.

 

2: NIST’s Control Overlays for Securing AI Systems

First up is NIST, which has clearly been listening to the industry's calls for help. The agency has unveiled its game plan in a new concept paper for a series of "Control Overlays for Securing AI Systems," or COSAIS. The goal here is refreshingly practical: to give organizations hands-on, implementation-focused guidelines for managing the very real cybersecurity risks that come with AI.

[Screenshot from the NIST COSAIS concept paper]

And they aren't asking us to learn a whole new language. The proposed overlays will be built directly on the widely used NIST Special Publication (SP) 800-53—the same catalog of security and privacy controls that already underpins the security programs of countless federal agencies and private companies. The idea is to adapt a trusted foundation, not start from zero. This approach came directly out of feedback from the community, especially from a workshop in April 2025, where stakeholders made it clear they needed to build on existing frameworks to tackle AI's unique security challenges.
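The overlay idea, selecting baseline controls and then tailoring or supplementing them for an AI use case, can be sketched in a few lines of code. This is purely illustrative: SI-4 (System Monitoring) and AC-3 (Access Enforcement) are real SP 800-53 control identifiers, but the AI-specific guidance strings and the merge logic below are assumptions of mine, not content from the forthcoming overlays.

```python
# Minimal sketch of the overlay concept: start from a baseline SP 800-53
# catalog and layer AI-specific tailoring on top. Guidance text is invented
# for illustration only.
baseline = {
    "SI-4": "System monitoring: monitor the system to detect attacks and indicators of compromise.",
    "AC-3": "Access enforcement: enforce approved authorizations for logical access.",
}

# A hypothetical generative-AI overlay: each entry says whether the baseline
# control is tailored or supplemented, and adds AI-specific guidance.
genai_overlay = {
    "SI-4": {
        "action": "supplement",
        "ai_guidance": "Also monitor model inputs and outputs for prompt injection patterns.",  # assumption
    },
    "AC-3": {
        "action": "tailor",
        "ai_guidance": "Scope authorizations to model endpoints and training data stores.",  # assumption
    },
}

def apply_overlay(baseline, overlay):
    """Merge overlay guidance onto the baseline control catalog."""
    tailored = {}
    for cid, text in baseline.items():
        extra = overlay.get(cid)
        if extra is None:
            tailored[cid] = text  # control passes through unchanged
        else:
            tailored[cid] = f"{text} [{extra['action'].upper()}] {extra['ai_guidance']}"
    return tailored

for cid, text in apply_overlay(baseline, genai_overlay).items():
    print(cid, "->", text)
```

The point of the sketch is that an overlay is a delta, not a replacement: organizations keep the catalog they already use and apply targeted changes per use case.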

Recognizing that AI isn't a monolith, NIST is breaking the problem down into five initial use cases. They plan to develop specific overlays for everything from the generative AI and LLMs we're all experimenting with, to the predictive AI used in business analytics, and the world of single and multi-agent AI systems. There's also a dedicated track just for the developers building these powerful tools.

This new initiative doesn't exist in a vacuum. It’s designed to work hand-in-glove with other major NIST efforts, like the foundational AI Risk Management Framework (AI RMF) and the upcoming Cybersecurity Framework Profile for AI. Together, they form a more complete picture of how to manage AI risk from the boardroom to the server room.

By leveraging standards we already understand, NIST is aiming to give organizations a clear and consistent way to protect the confidentiality, integrity, and availability of their AI systems and the data that fuels them. They are keeping the doors open for feedback on this concept and plan to release the first public draft for comment in early 2026.

 

3: CSA AICM Bundle

[Screenshot of the AI Controls Matrix download page]

The CSA AICM bundle is a comprehensive toolkit released by the Cloud Security Alliance (CSA) in July 2025, designed to provide robust security, governance, and compliance controls tailored specifically for generative AI systems. I had the opportunity to lead parts of this major effort as Co-chair of the AI Control Working Group, helping guide its development and shape its direction. At its core, the AICM—short for AI Controls Matrix—is an actionable, vendor-agnostic framework that creates a structure for managing risks and establishing best practices throughout the entire lifecycle of AI, encompassing everything from infrastructure operations to application development.

The AI Controls Matrix consists of 243 individual controls distributed across 18 security domains. These domains cover a range of topics, including traditional information security fields like Identity & Access Management, Incident Response, and Data Privacy, as well as areas uniquely relevant to artificial intelligence such as Model Security, Data Lineage, Bias Monitoring, and Supply Chain Risk Management. This wide scope ensures organizations have granular, practical guidelines to address both long-established and emerging threats in AI environments.

Underlying the AICM bundle are five foundational pillars:

  • Control Type: distinguishes controls as AI-specific, applicable to both cloud and AI, or unique to cloud systems.
  • Ownership and Applicability: maps accountability to relevant stakeholders, such as model providers, orchestrators, application developers, cloud service providers, and end users.
  • Architectural Relevance: maps controls to different architectural layers, including physical, network, computation, storage, application, and data, to ensure coverage throughout the technical stack.
  • Lifecycle Relevance: specifies how controls apply during the various phases of AI development, from preparation and conception through development, validation, deployment, delivery, and retirement.
  • Threat Category: categorizes controls according to a range of prevalent AI threats, such as model manipulation, data poisoning, sensitive data exposure, model theft, supply chain vulnerabilities, and loss of governance or compliance.
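One way to picture these pillars is as metadata dimensions attached to each control record, which can then be filtered by stakeholder or lifecycle phase. The sketch below is an assumed schema for illustration only: the field names, the control ID "MDS-01", and the example values are invented, not taken from the actual matrix.

```python
from dataclasses import dataclass

# Illustrative model of an AICM-style control record. The schema and all
# example values are assumptions, not the official AICM data format.
@dataclass
class AICMControl:
    control_id: str              # e.g. "MDS-01" (hypothetical identifier)
    domain: str                  # one of the 18 security domains
    control_type: str            # "AI-specific", "cloud-and-AI", or "cloud-only"
    owners: list[str]            # accountable stakeholders
    layers: list[str]            # architectural layers the control touches
    lifecycle_phases: list[str]  # phases where the control applies
    threat_categories: list[str] # threats the control mitigates

example = AICMControl(
    control_id="MDS-01",
    domain="Model Security",
    control_type="AI-specific",
    owners=["model provider", "application developer"],
    layers=["application", "data"],
    lifecycle_phases=["development", "validation", "deployment"],
    threat_categories=["model manipulation", "model theft"],
)

def applicable(controls, owner, phase):
    """Filter a catalog for controls a given stakeholder owns in a given phase."""
    return [c for c in controls
            if owner in c.owners and phase in c.lifecycle_phases]

print([c.control_id for c in applicable([example], "model provider", "deployment")])  # -> ['MDS-01']
```

Tagging each control along all five dimensions is what lets the matrix answer practical questions like "which controls does a model provider own during deployment?"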

The bundle itself contains several components. Besides the main AICM matrix, it includes the AI-CAIQ, which is a Consensus Assessment Initiative Questionnaire tailored for AI. This questionnaire supports self-assessment by organizations as well as third-party vendor evaluations, creating a baseline for determining AI security and readiness for compliance certifications. To assist organizations in putting theory into practice, the bundle also provides in-depth implementation guidelines for every control, along with robust auditing best practices. For organizations operating internationally or under regulatory oversight, the bundle features explicit cross-mappings to key global standards—such as ISO 42001, ISO 27001, NIST AI RMF 1.0, the BSI AI C4 framework, and the EU AI Act—making it easier to align AI security strategies with existing legal and compliance requirements.
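In practice, a self-assessment against the AI-CAIQ boils down to answering control questions and tallying the gaps. The toy sketch below illustrates that idea; the question IDs and wording are hypothetical placeholders I made up, not drawn from the actual questionnaire.

```python
# Hypothetical AI-CAIQ-style self-assessment. Question IDs and text are
# invented for illustration, not taken from the CSA bundle.
questions = {
    "AIS-01.1": "Is there a documented AI security policy?",
    "AIS-02.1": "Is training-data lineage tracked end to end?",
    "AIS-03.1": "Are models tested for adversarial robustness before release?",
}

# An organization's answers to the questionnaire.
answers = {"AIS-01.1": "yes", "AIS-02.1": "no", "AIS-03.1": "yes"}

def readiness(answers):
    """Fraction of questions answered 'yes' -- a crude readiness baseline."""
    yes = sum(1 for a in answers.values() if a == "yes")
    return yes / len(answers)

# Unanswered or negative items become the remediation backlog.
gaps = [qid for qid, a in answers.items() if a != "yes"]
print(f"Readiness: {readiness(answers):.0%}, gaps: {gaps}")  # -> Readiness: 67%, gaps: ['AIS-02.1']
```

A real assessment would weight questions by control criticality rather than count them equally, but the flow is the same: answer, score, and turn the "no" items into a remediation plan.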

One major advantage of the CSA AICM bundle is its adaptability and breadth. Organizations can use it to conduct rigorous self-assessments, evaluate vendors, define or refine AI governance policies, and prepare for future industry certifications. The toolkit is strategically positioned as the foundation for expanding CSA's STAR (Security, Trust, Assurance, and Risk) program into an AI STAR program, which is set to include dedicated AI certifications and attestation pathways in the near future.

Domains addressed within the matrix include not only critical foundational areas like Data Security & Privacy, Governance, Risk & Compliance, and Incident Response, but also specialized AI topics such as Model Robustness, Adversarial Defenses, AI Lifecycle Management, and Accountability. This makes the AICM bundle a forward-looking and rigorously crafted solution for modern organizations aiming to responsibly scale their use of artificial intelligence across operational environments.

 

4: The Need for Both Frameworks

The existence of both the NIST Control Overlays and the CSA AI Controls Matrix (AICM) is essential because they cater to different organizational needs and perspectives within the rapidly evolving AI landscape.

  • NIST's Government and Broader Industry Focus: NIST's work is foundational for U.S. federal agencies and is widely adopted and respected across various industries. The SP 800-53 controls are a well-established standard for general cybersecurity. By creating AI-specific "overlays" for this existing framework, NIST provides a familiar and trusted pathway for a broad range of organizations, particularly those already aligned with NIST standards, to adapt their current security practices for AI.
  • CSA's Cloud-Centric, Vendor-Agnostic Approach: The CSA specializes in cloud security. Given that a vast number of AI systems are developed and deployed in the cloud, the AICM offers a framework specifically tailored to these environments. It is designed to be vendor-agnostic, providing a common set of security and governance principles for organizations that use various cloud services to build and run their AI technologies.

 

5: Overlapping Goals with Different Focuses

While both frameworks share the goal of securing AI, they travel different roads to get there. Their differences become clear when you look at their foundations, their intended audience, and what you actually get when you use them.

On one hand, NIST’s approach is about evolution, not revolution. Its foundation is the widely adopted SP 800-53, a comprehensive security and privacy catalog that already serves as the backbone for U.S. federal agencies and countless private companies. The primary goal here is to provide a series of practical, implementation-focused guidelines by helping organizations select, tailor, and supplement these existing controls for specific AI use cases. This means the structure isn't a single massive document, but a library of targeted "overlays." NIST is starting with five proposed use cases, covering generative AI, predictive AI, single and multi-agent systems, and even guidance specifically for AI developers.

In contrast, the Cloud Security Alliance has built its framework from a cloud-native perspective. The foundation for the AICM is the CSA's own Cloud Control Matrix (CCM), ensuring its DNA is tailored for cloud environments from the start. Its primary goal is to offer a vendor-agnostic control objectives framework to help organizations securely develop and deploy AI technologies specifically in the cloud. The structure reflects this comprehensive goal: it’s a detailed matrix containing 243 control objectives organized across 18 distinct security domains, covering everything from governance and risk to model security and data privacy.

These different approaches naturally cater to different audiences. NIST casts a wide net, targeting the vast ecosystem of organizations—both public and private—that already align with its cybersecurity frameworks. The guidance is then further segmented for different roles within that ecosystem, including cybersecurity practitioners, AI users, and developers. The unique focus for NIST is on providing a familiar path forward, allowing organizations to customize and prioritize security by leveraging the controls and processes they already have in place.

The CSA, meanwhile, has a laser-focused audience: organizations that are building and running AI in the cloud. This includes everyone from CISOs and risk officers to the cybersecurity practitioners on the front lines. The unique focus and real power of the AICM is its deep dive into the complexities of the cloud's shared responsibility model. It provides much-needed clarity on control applicability and ownership across the different layers of the AI stack—from the cloud service provider to the model provider and the final application provider.

Finally, what you get—the key deliverables—also differs. With NIST, the final product will be a series of distinct control overlay documents, each one a playbook for a specific AI use case. With the CSA, you get the "AICM Bundle," which is more like a complete toolkit. It includes the core controls matrix, a Consensus Assessment Initiative Questionnaire for AI (AI-CAIQ) for conducting assessments, and valuable mappings to other critical standards and regulations, such as ISO 42001 and the EU AI Act.
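The bundle's cross-mappings effectively form a crosswalk from AICM controls to clauses in other frameworks, which is what makes multi-framework compliance tractable. The snippet below shows how such a crosswalk might be queried; the AICM control IDs and the specific clause assignments are placeholders invented for illustration, not the bundle's actual mappings.

```python
# Toy crosswalk from (hypothetical) AICM control IDs to clauses in other
# frameworks. The mappings below are illustrative placeholders only.
crosswalk = {
    "AICM-GRC-01": {"ISO 42001": ["5.2"], "EU AI Act": ["Art. 9"]},   # hypothetical
    "AICM-MDS-03": {"ISO 42001": ["8.4"], "EU AI Act": ["Art. 15"]},  # hypothetical
    "AICM-IAM-02": {"ISO 27001": ["A.5.15"]},                          # hypothetical
}

def coverage(framework):
    """List AICM controls that map to at least one clause of `framework`."""
    return [cid for cid, mappings in crosswalk.items() if mappings.get(framework)]

print(coverage("EU AI Act"))  # -> ['AICM-GRC-01', 'AICM-MDS-03']
```

Run in one direction, a crosswalk like this shows which regulatory clauses a control satisfies; inverted, it shows which controls an auditor should sample when assessing against a given framework.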

The following table summarizes the differences: 

Foundation
  • NIST COSAIS: Built upon the existing and widely adopted NIST SP 800-53, a comprehensive catalog of security and privacy controls.
  • CSA AICM Bundle: Built on the foundation of the Cloud Control Matrix (CCM), specifically for cloud environments.

Primary Goal
  • NIST COSAIS: To provide a series of implementation-focused guidelines by selecting, tailoring, and supplementing existing SP 800-53 controls for specific AI use cases.
  • CSA AICM Bundle: To offer a vendor-agnostic control objectives framework to help organizations securely develop, implement, and use AI technologies in the cloud.

Structure
  • NIST COSAIS: A library of overlays for five initial proposed use cases, including generative AI, predictive AI, AI agents (single and multi-agent), and AI development.
  • CSA AICM Bundle: A comprehensive matrix of 18 security domains containing 243 control objectives, covering areas from audit and assurance to threat and vulnerability management.

Audience
  • NIST COSAIS: Broad, including U.S. federal agencies and any organization that currently uses or plans to use NIST cybersecurity frameworks, further segmented by use case to target cybersecurity practitioners, AI users, and developers.
  • CSA AICM Bundle: Organizations developing and deploying AI systems in the cloud, with a focus on CSA members and the broader cloud industry, including cybersecurity senior leadership, CISOs, risk officers, and practitioners.

Unique Focus
  • NIST COSAIS: Customization and prioritization of existing, well-known controls for specific AI applications; the approach is to adapt what many organizations already have in place.
  • CSA AICM Bundle: A detailed, cloud-native perspective on AI security and governance, with a new set of controls created specifically for AI in the cloud, addressing control applicability and ownership across different service delivery models.

Key Deliverables
  • NIST COSAIS: A series of control overlay documents, each tailored to a specific AI use case.
  • CSA AICM Bundle: The AICM bundle, which includes the controls matrix, a Consensus Assessment Initiative Questionnaire for AI (AI-CAIQ), and mappings to other standards like ISO 42001 and the EU AI Act.

 

6: Stronger Together

Ultimately, the frameworks from NIST and the CSA should not be viewed as competing standards but as complementary resources. NIST extends a trusted, universal cybersecurity framework into the AI domain, allowing organizations to build upon their existing security programs. The CSA, in contrast, has created a specialized framework from a cloud-first perspective, addressing the unique challenges of securing AI in a multi-layered, shared-responsibility cloud environment.

By leveraging both, an organization can develop a truly comprehensive AI security strategy—using NIST's overlays to adapt its enterprise-wide security posture and the CSA's AICM to drill down into the specific risks and control requirements of its cloud-based AI deployments. Together, they provide a powerful toolkit for navigating the new frontier of AI security.

To learn more, you can access the source documents directly from NIST and the CSA.

Disclaimer: Please note that the views and opinions expressed in this article are solely those of the author and do not necessarily represent the official policy or position of NIST, the Cloud Security Alliance (CSA), or Distributedapps.ai.
