CSA Large Language Model (LLM) Threats Taxonomy

Release Date: 06/10/2024

Working Group: AI Safety

This document aims to align the industry by defining key terms related to Large Language Model (LLM) risks and threats. Establishing a common language reduces confusion, connects related concepts, and enables more precise dialogue across diverse groups. This common language will ultimately advance Artificial Intelligence (AI) risk evaluation, AI control measures, and responsible AI governance. The taxonomy will also support further research within the context of CSA's AI Safety Initiative.

Key Takeaways:
  • Define the assets that are essential for implementing and managing LLM/AI systems
  • Define the phases of the LLM lifecycle
  • Define potential LLM risks
  • Define the impact categories of LLM risks

Related resources
Data Security within AI Environments
Introductory Guidance to AICM
Capabilities-Based Risk Assessment (CBRA) for AI Systems
AAGATE: A NIST AI RMF-Aligned Governance Platform for Agentic AI
Published: 12/22/2025
Agentic AI Security: New Dynamics, Trusted Foundations
Published: 12/18/2025
AI Security Governance: Your Maturity Multiplier
Published: 12/18/2025
Deterministic AI vs. Generative AI: Why Precision Matters for Automated Security Fixes
Published: 12/17/2025
