
Working Group

AI Controls

This committee aligns with the NIST Cybersecurity Framework to establish a robust, flexible, and multi-layered framework.
CSA Large Language Model (LLM) Threats Taxonomy

The AI Controls Committee is dedicated to defining and implementing a comprehensive set of security controls for AI systems. The committee aims to adhere to the top-level functions of the NIST Cybersecurity Framework (CSF): Identify, Protect, Detect, Respond, and Recover. By aligning with these proven guidelines, the committee seeks to establish a robust, flexible, and compliant framework that addresses security risks across multiple layers, from data to application.

Working Group Leadership

Marina Bregkou

Senior Research Analyst, CSA EMEA

Daniele Catteddu

Chief Technology Officer, CSA

Daniele Catteddu is an information security and risk management practitioner, technology expert, and privacy evangelist with over 15 years of experience. He has worked in several senior roles in both the private and public sectors. He is a member of various national and international security expert groups and committees on cybersecurity and privacy, a keynote speaker at several conferences, and the author of numerous studies and papers on risk management, ...

Working Group Co-Chairs

Marco Capotondi

Agency for National Cybersecurity, Italy

Marco Capotondi is an engineer specialized in applied AI, with a focus on AI security and AI applied to autonomous systems. He holds a Bachelor’s degree in Physics and a Master’s degree in AI Engineering, and earned a doctoral degree with research on Bayesian Learning techniques applied to autonomous systems, on which he has published many papers. His current focus is helping the community define and manage risks associated with Artificial Intelligen...

Siah Burke

Ken Huang

Chief AI Officer at DistributedApps.ai

Ken Huang is an acclaimed author of 8 books on AI and Web3. He is the Co-Chair of the AI Organizational Responsibility Working Group and AI Control Framework at the Cloud Security Alliance. Additionally, Huang serves as Chief AI Officer of DistributedApps.ai, which provides training and consulting services for Generative AI Security.

In addition, Huang contributed extensively to key initiatives in the space. He is a core contributor t...

Alessandro Greco

Publications in Review

Shadow Access and AI (open until Nov 17, 2024)
Enterprise Authority To Operate (EATO) Auditing Guidelines (open until Nov 18, 2024)
Context-Based Access Control for Zero Trust (open until Nov 27, 2024)
Who can join?

Anyone can join a working group, whether you have years of experience or just want to observe as a fly on the wall.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a nearly finished publication, or help author a publication from start to finish.

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review on the CSA website.

Shadow Access and AI

Open Until: 11/17/2024

The document titled "Shadow Access and AI" explores the intricate relationship between Shadow Access and AI, highlighting t...

Enterprise Authority To Operate (EATO) Auditing Guidelines

Open Until: 11/18/2024

The CSA Enterprise Authority to Operate (EATO) Working Group has identified gaps within the understanding and implementa...

Context-Based Access Control for Zero Trust

Open Until: 11/27/2024

The document "Context-Based Access Control for Zero Trust" provides guidance on implementing context-based access control (...