This document aims to align the industry by defining key terms related to Large Language Model (LLM) risks and threats. Establishing a common language reduces confusion, connects related concepts, and enables more precise dialogue across diverse groups. This shared vocabulary will ultimately advance Artificial Intelligence (AI) risk evaluation, AI control measures, and responsible AI governance. The taxonomy will also support further research within the context of CSA's AI Safety Initiative.
Key Takeaways:
- Defines the assets essential for implementing and managing LLM/AI systems
- Defines the phases of the LLM lifecycle
- Defines potential LLM risks
- Defines the impact categories of LLM risks