AI Organizational Responsibilities - Core Security Responsibilities
Who it's for:
  • CISOs and Chief AI Officers 
  • Business leaders, decision makers, and shareholders
  • AI engineers, analysts, and developers
  • Policymakers and regulators
  • Customers and the general public

Release Date: 05/05/2024

Working Group: AI Safety

This publication from the CSA AI Organizational Responsibilities Working Group provides a blueprint for enterprises to fulfill their core information security responsibilities in the development and deployment of Artificial Intelligence (AI) and Machine Learning (ML) systems. Expert-recommended best practices and standards, including the NIST AI RMF, NIST SSDF, NIST SP 800-53, and the CSA CCM, are synthesized into three core security areas: data protection mechanisms, model security, and vulnerability management. Each responsibility is analyzed using quantifiable evaluation criteria, the RACI model for role definitions, high-level implementation strategies, continuous monitoring and reporting mechanisms, access control mapping, and adherence to foundational guardrails.

Key Takeaways:
  • The components of the AI Shared Responsibility Model
  • How to ensure the security and privacy of AI training data
  • The significance of AI model security, including access controls, secure runtime environments, vulnerability and patch management, and MLOps pipeline security
  • The significance of AI vulnerability management, including AI/ML asset inventory, continuous vulnerability scanning, risk-based prioritization, and remediation tracking

The other two publications in this series discuss the AI regulatory environment and a benchmarking model for AI resilience. By outlining recommendations across these key areas of security and compliance in three targeted publications, this series guides enterprises in fulfilling their obligations for responsible and secure AI development and deployment.
