CSA Large Language Model (LLM) Threats Taxonomy
Release Date: 06/10/2024
Working Group: AI Safety Initiative
This document aims to align the industry by defining key terms related to Large Language Model (LLM) risks and threats. Establishing a common language reduces confusion, connects related concepts, and enables more precise dialogue across diverse groups. This shared vocabulary will ultimately advance Artificial Intelligence (AI) risk evaluation, AI control measures, and responsible AI governance. This taxonomy will also support further research within CSA's AI Safety Initiative.
Key Takeaways:
- Define the assets that are essential for implementing and managing LLM/AI systems
- Define the phases of the LLM lifecycle
- Define potential LLM risks
- Define the impact categories of LLM risks
Acknowledgements
Dennis Xu
VP Analyst @ Gartner
Rakesh Sharma
Security Architect