
AI Organizational Responsibilities - Core Security Responsibilities
Who it's for:
  • CISOs and Chief AI Officers 
  • Business leaders, decision makers, and shareholders
  • AI engineers, analysts, and developers
  • Policymakers and regulators
  • Customers and the general public

This publication from the CSA AI Organizational Responsibilities Working Group provides a blueprint for enterprises to fulfill their core information security responsibilities in the development and deployment of Artificial Intelligence (AI) and Machine Learning (ML). Expert-recommended best practices and standards, including the NIST AI RMF, NIST SSDF, NIST SP 800-53, and the CSA CCM, are synthesized into three core security areas: data protection mechanisms, model security, and vulnerability management. Each responsibility is analyzed through quantifiable evaluation criteria, RACI-based role definitions, high-level implementation strategies, continuous monitoring and reporting mechanisms, access control mapping, and adherence to foundational guardrails.

Key Takeaways:
  • The components of the AI Shared Responsibility Model
  • How to ensure the security and privacy of AI training data
  • The significance of AI model security, including access controls, secure runtime environments, vulnerability and patch management, and MLOps pipeline security
  • The significance of AI vulnerability management, including AI/ML asset inventory, continuous vulnerability scanning, risk-based prioritization, and remediation tracking
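To make the last takeaway concrete, the sketch below shows one way risk-based prioritization over an AI/ML asset inventory could look in practice. This is a minimal illustration only, not taken from the publication: the asset names, fields, and scoring weights are invented assumptions.

```python
# Hypothetical sketch: risk-based prioritization of vulnerability findings
# from an AI/ML asset inventory. Scoring weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str            # e.g. a model endpoint, dataset, or MLOps component
    cvss: float           # base severity score (0-10)
    exposed: bool         # asset is internet-facing
    sensitive_data: bool  # asset touches training data or PII

def risk_score(f: Finding) -> float:
    """Weight raw severity by asset context; higher means remediate sooner."""
    score = f.cvss
    if f.exposed:
        score *= 1.5   # assumed multiplier for internet exposure
    if f.sensitive_data:
        score *= 1.3   # assumed multiplier for data sensitivity
    return round(score, 2)

findings = [
    Finding("model-serving-api", 6.1, exposed=True, sensitive_data=False),
    Finding("training-data-store", 5.0, exposed=False, sensitive_data=True),
    Finding("internal-notebook", 7.2, exposed=False, sensitive_data=False),
]

# Remediation queue, highest contextual risk first
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.asset, risk_score(f))
```

The point of the sketch is that a medium-severity finding on an exposed model API can outrank a higher-severity finding on an internal asset once context is factored in, which is the essence of risk-based prioritization.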

The other two publications in this series discuss the AI regulatory environment and a benchmarking model for AI resilience. By outlining recommendations across these key areas of security and compliance in three targeted publications, this series guides enterprises in fulfilling their obligations for responsible and secure AI development and deployment.
Related resources:
  • Confronting Shadow Access Risks: Considerations for Zero Trust and Artificial Intelligence Deployments
  • AI Resilience: A Revolutionary Benchmarking Model for AI Safety
  • Principles to Practice: Responsible AI in a Dynamic Regulatory Environment
  • The Risk and Impact of Unauthorized Access to Enterprise Environments (Published: 05/17/2024)
  • Automated Cloud Remediation – Empty Hype, Viable Strategy, or Something in Between? (Published: 05/17/2024)
  • Securing Generative AI with Non-Human Identity Management and Governance (Published: 05/16/2024)
  • Utah S.B. 149: Creating a Safe Space for Developers While Regulating Deceptive AI (Published: 05/09/2024)