
CSA Large Language Model (LLM) Threats Taxonomy

Release Date: 06/10/2024

Working Group: AI Safety

This document aims to align the industry by defining key terms related to Large Language Model (LLM) risks and threats. Establishing a common language reduces confusion, helps connect related concepts, and enables more precise dialogue across diverse groups. This shared vocabulary will ultimately advance Artificial Intelligence (AI) risk evaluation, AI control measures, and responsible AI governance. The taxonomy will also support further research within the context of CSA's AI Safety Initiative.

Key Takeaways:
  • Define the assets that are essential for implementing and managing LLM/AI systems
  • Define the phases of the LLM lifecycle
  • Define potential LLM risks
  • Define the impact categories of LLM risks
