Principles to Practice: Responsible AI in a Dynamic Regulatory Environment
Who it's for:
  • C-Suite
  • Cloud security and AI professionals
  • Compliance managers

Artificial Intelligence (AI) innovation shows no sign of slowing down, as major technology companies plan to invest hundreds of billions of dollars in the technology. The legal and regulatory landscape is struggling to keep pace. Existing regulations like GDPR and CCPA/CPRA provide a foundation for data privacy, but they do not offer specific guidance for the unique challenges and risks of AI.

This publication by the CSA AI Governance & Compliance Working Group provides an overview of existing regulations and their impact on AI development, deployment, and usage, as well as the challenges and opportunities surrounding the development of new AI legislation. It equips individuals and organizations with the knowledge they need to navigate the rapidly changing requirements for responsible AI at the regional, national, and international levels.

Key Takeaways: 
  • How existing laws and regulations relate to AI, including GDPR, CCPA, CPRA, and HIPAA
  • The impact of AI hallucinations on data privacy, security, and ethics
  • The impact of anti-discrimination laws and regulations on AI
  • An overview of emerging AI regulations
  • Considerations relating to AI ethics, liability, and intellectual property
  • A summary of technical best practices for implementing responsible AI
  • How to approach the continuous monitoring of AI

The other two publications in this series discuss core AI security responsibilities and a benchmarking model for AI resilience. By outlining recommendations across these key areas of security and compliance in three targeted publications, this series guides enterprises in fulfilling their obligations for responsible and secure AI development and deployment.
Download this Resource

Related resources
AI Model Risk Management Framework
CSA Large Language Model (LLM) Threats Taxonomy
Confronting Shadow Access Risks: Considerations for Zero Trust and Artificial Intelligence Deployments
Navigating Data Privacy in the Age of AI: How to Chart a Course for Your Organization
Published: 07/26/2024
Integrating PSO with AI: The Future of Adaptive Cybersecurity
Published: 07/23/2024
Enhancing AI Reliability: Introducing the LLM Observability & Trust API
Published: 07/19/2024
Data Breach Accountability: Who’s to Blame?
Published: 07/16/2024
SECtember.ai 2024
September 10 | Bellevue, WA