
Working Group

AI Controls

This committee aligns with the NIST Cybersecurity Framework to establish a robust, flexible, and multi-layered framework.
CSA Large Language Model (LLM) Threats Taxonomy


The AI Controls Committee is dedicated to defining and implementing a comprehensive set of security controls for AI systems. The committee aims to adhere to the top-level functions of the NIST Cybersecurity Framework (CSF): Identify, Protect, Detect, Respond, and Recover. By aligning with these proven guidelines, the committee seeks to establish a robust, flexible, and compliant framework that addresses security risks across multiple layers, from data to application.

Working Group Leadership

Marina Bregkou

Senior Research Analyst, CSA EMEA

Sean Heide

Technical Research Director, CSA

Publications in Review | Open Until
Zero Trust Guidance for Critical Infrastructure | Jul 11, 2024
Authorization Best Practices for Systems using LLMs | Jul 12, 2024
Using AI for Offensive Security | Jul 12, 2024
Who can join?

Anyone can join a working group, whether you have years of experience or just want to participate as a fly on the wall.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished, or help author a publication from start to finish.

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review here.

Zero Trust Guidance for Critical Infrastructure

Open Until: Jul 11, 2024

The goal of this paper is to educate the target audience on considerations and application of Zero Trust principles for Critical Infrastructure.

Authorization Best Practices for Systems using LLMs

Open Until: Jul 12, 2024

This document targets engineers, architects, and security professionals, providing an understanding of the specific risks a...

Using AI for Offensive Security

Open Until: Jul 12, 2024

The emergence of Artificial Intelligence (AI) technology, particularly Large Language Models (LLMs) and AI Agents, has triggered...