
Working Group

AI Controls

This committee aligns with the NIST Cybersecurity Framework to establish a robust, flexible, and multi-layered set of security controls for AI systems.
CSA Large Language Model (LLM) Threats Taxonomy

The AI Controls Committee is dedicated to defining and implementing a comprehensive set of security controls for AI systems. The committee aims to adhere to the top-level functions of the NIST Cybersecurity Framework (CSF): Identify, Protect, Detect, Respond, and Recover. By aligning with these proven guidelines, the committee seeks to establish a robust, flexible, and compliant framework that addresses security risks across multiple layers, from data to application.

Working Group Leadership

Marina Bregkou

Senior Research Analyst, CSA EMEA

Sean Heide

Technical Research Director, CSA

Working Group Co-Chairs

Marco Capotondi

Agency for National Cybersecurity, Italy

Marco Capotondi is an engineer specializing in applied AI, with a focus on AI security and AI applied to autonomous systems. He holds a Bachelor's degree in Physics and a Master's degree in AI Engineering, and earned his doctorate with research on Bayesian learning techniques applied to autonomous systems, a topic on which he has published numerous papers. His current focus is helping the community define and manage risks associated with Artificial Intelligen...


Siah Burke

Ken Huang

Chief AI Officer at DistributedApps.ai

Ken Huang is an acclaimed author of eight books on AI and Web3. He is Co-Chair of the AI Organizational Responsibility Working Group and the AI Control Framework at the Cloud Security Alliance, and serves as Chief AI Officer of DistributedApps.ai, which provides training and consulting services for Generative AI security.

Huang has also contributed extensively to key initiatives in the space. He is a core contributor t...


Alessandro Greco

Publications in Review

Guidelines for Auditing AI (Open Until: Jul 31, 2024)
Data Privacy Engineering Working Group Charter 2024 (Open Until: Aug 08, 2024)
Using Asymmetric Cryptography to Help Achieve Zero Trust Objectives (Open Until: Aug 12, 2024)
Don’t Panic! Getting Real About AI Governance (Open Until: Aug 25, 2024)
Who can join?

Anyone can join a working group, whether you have years of experience or simply want to observe.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished, or help author a publication from start to finish.

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review below.

Guidelines for Auditing AI

Open Until: 07/31/2024

Guidelines for Auditing AI presents a comprehensive framework for auditing AI systems, emphasizing the need for tr...

Data Privacy Engineering Working Group Charter 2024

Open Until: 08/08/2024

The Data Privacy Engineering Working Group (DPE WG) is chartered with the mission to integrate privacy-centric methodologie...

Using Asymmetric Cryptography to Help Achieve Zero Trust Objectives

Open Until: 08/12/2024

This paper investigates the convergence of asymmetric cryptography and Zero Trust architecture, exploring the utilization o...

Don’t Panic! Getting Real About AI Governance

Open Until: 08/25/2024

Amidst the rampant hype about AI (especially Generative AI), there is a real story about how AI systems can be used to buil...