AI Organizational Responsibilities - Core Security Responsibilities - Korean Translation

Release Date: 09/24/2024

This localized version of the publication was produced from the original source material through the efforts of chapters and volunteers, but the translated content falls outside of the CSA Research Lifecycle. For any questions or feedback, contact [email protected].

The following description is from the original English publication:

"This publication from the CSA AI Organizational Responsibilities Working Group provides a blueprint for enterprises to fulfill their core information security responsibilities pertaining to the development and deployment of Artificial Intelligence (AI) and Machine Learning (ML). Expert-recommended best practices and standards, including NIST AI RMF, NIST SSDF, NIST 800-53, and CSA CCM, are synthesized into 3 core security areas: data protection mechanisms, model security, and vulnerability management. Each responsibility is analyzed using quantifiable evaluation criteria, the RACI model for role definitions, high-level implementation strategies, continuous monitoring and reporting mechanisms, access control mapping, and adherence to foundational guardrails.
Key Takeaways:
  • The components of the AI Shared Responsibility Model
  • How to ensure the security and privacy of AI training data
  • The significance of AI model security, including access controls, secure runtime environments, vulnerability and patch management, and MLOps pipeline security
  • The significance of AI vulnerability management, including AI/ML asset inventory, continuous vulnerability scanning, risk-based prioritization, and remediation tracking

The other two publications in this series discuss the AI regulatory environment and a benchmarking model for AI resilience. By outlining recommendations across these key areas of security and compliance in 3 targeted publications, this series guides enterprises to fulfill their obligations for responsible and secure AI development and deployment.
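
As a rough illustration of the risk-based vulnerability prioritization and remediation tracking mentioned in the takeaways above, the sketch below scores findings against a hypothetical AI/ML asset inventory. The asset names, fields, and weighting formula are assumptions made for illustration and are not drawn from the publication itself.

# Minimal sketch (illustrative only): risk-based prioritization over a
# hypothetical AI/ML asset inventory. Field names and the scoring formula
# are assumptions, not taken from the CSA publication.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str               # e.g. a model endpoint or training pipeline
    cvss: float              # base severity score (0-10)
    exposure: float          # 0-1, how reachable the asset is
    data_sensitivity: float  # 0-1, sensitivity of the data the asset touches
    remediated: bool = False

def risk_score(f: Finding) -> float:
    """Combine base severity with asset context; the weights are assumptions."""
    return f.cvss * (0.5 + 0.25 * f.exposure + 0.25 * f.data_sensitivity)

inventory = [
    Finding("fraud-model-serving-api", cvss=7.5, exposure=1.0, data_sensitivity=0.9),
    Finding("mlops-training-pipeline", cvss=9.1, exposure=0.2, data_sensitivity=0.7),
    Finding("feature-store", cvss=5.3, exposure=0.4, data_sensitivity=1.0),
]

# Remediation tracking: work the open findings in descending risk order.
for f in sorted(inventory, key=risk_score, reverse=True):
    if not f.remediated:
        print(f"{f.asset}: risk={risk_score(f):.1f}")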
 
Related resources
  • AI in Medical Research: Applications & Considerations
  • AI Resilience: A Revolutionary Benchmarking Model for AI Safety - Japanese Translation
  • Don’t Panic! Getting Real about AI Governance
  • Reflections on NIST Symposium in September 2024, Part 1 (Published: 10/04/2024)
  • Embracing AI in Cybersecurity: 6 Key Insights from CSA’s 2024 State of AI and Security Survey Report (Published: 10/04/2024)
  • Secure by Design: Implementing Zero Trust Principles in Cloud-Native Architectures (Published: 10/03/2024)
  • AI Legal Risks Could Increase Due to Loper Decision (Published: 10/03/2024)
  • CSA Global AI Symposium (October 22 | Virtual)