
Working Group

AI Controls

This committee aligns with the NIST Cybersecurity Framework to establish a robust, flexible, and multi-layered control framework.
View Current Projects
CSA Large Language Model (LLM) Threats Taxonomy

Download

The CSA AI Control Framework Working Group’s goal is to define a framework of control objectives to support organizations in their secure and responsible development, management, and use of AI technologies. The framework will assist in evaluating risks and defining controls related to Generative AI (GenAI). The control objectives will cover cybersecurity, as well as safety, privacy, transparency, accountability, and explainability insofar as they relate to cybersecurity.

Working Group Leadership

Marina Bregkou

Principal Research Analyst, Associate VP

Daniele Catteddu

Chief Technology Officer, CSA

Daniele Catteddu is an information security and risk management practitioner, technology expert, and privacy evangelist with over 15 years of experience. He has worked in several senior roles in both the private and public sectors. He is a member of various national and international expert groups and committees on cybersecurity and privacy, a keynote speaker at several conferences, and the author of numerous studies and papers on risk management, ...


Working Group Co-Chairs

Marco Capotondi

Agency for National Cybersecurity, Italy

Marco Capotondi is an engineer specialized in applied AI, with a focus on AI security and AI applied to autonomous systems. He holds a Bachelor’s degree in Physics and a Master’s degree in AI Engineering, and earned a doctorate for research on Bayesian Learning techniques applied to autonomous systems, on which he has published many papers. His current focus is helping the community define and manage risks associated with Artificial Intelligen...


Siah Burke

Ken Huang

Chief AI Officer at DistributedApps.ai

Ken Huang is an acclaimed author of 8 books on AI and Web3. He is the Co-Chair of the AI Organizational Responsibility Working Group and AI Control Framework at the Cloud Security Alliance. Additionally, Huang serves as Chief AI Officer of DistributedApps.ai, which provides training and consulting services for Generative AI Security.

In addition, Huang contributed extensively to key initiatives in the space. He is a core contributor t...


Alessandro Greco

Who can join?

Anyone can join a working group, whether you have years of experience or just want to participate as a fly on the wall.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished, or help author a publication from start to finish.

Virtual Meetings

Attend our next meeting. You can just listen in to decide if this group is a good fit for you, or you can choose to actively participate. During these calls we discuss current projects, as well as share ideas for new projects. This is a good way to meet the other members of the group. You can view all research meetings here.

No scheduled meetings for this working group in the next 60 days.

See Full Calendar for this Working Group

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review here.

AICM mapping to BSI AI C4 Catalog

Open Until: 03/24/2025

The AICM to BSI AI C4 Mapping initiative aims to ensure a comprehensive alignment between the CSA AI Controls Matrix (AICM)...

CCMv4.0 Mapping to HITRUST CSF v11.3

Open Until: 03/25/2025

The Cloud Security Alliance (CSA) would like to announce an additional ma...

SaaS Technical Controls Final Draft

Open Until: 04/03/2025

The Cloud Security Alliance (CSA), in collaboration with MongoDB, GuidePoint Security, and the SaaS Working Group, is invit...

The State of Data Privacy Engineering

Open Until: 04/12/2025

This paper provides a comprehensive overview of Data Privacy Engineering (DPE), its importance, and its application in toda...

Premier AI Safety Ambassadors

Premier AI Safety Ambassadors play a leading role in promoting AI safety within their organization, advocating for responsible AI practices and promoting pragmatic solutions to manage AI risks. Contact [email protected] to learn how your organization could participate and take a seat at the forefront of AI safety best practices.