Securing LLM Backed Systems: Essential Authorization Practices
Who it's for:
  • System engineers and architects
  • Privacy and security professionals

Release Date: 08/13/2024

Organizations are increasingly leveraging Large Language Models (LLMs) to tackle diverse business problems. Both established companies and a crop of new startups are vying for first-mover advantages. With this mass adoption of LLM-backed systems comes a critical need for formal guidance on their secure design. Organizations especially need this guidance when an LLM must make decisions or utilize external data sources.

This document by the CSA AI Technology and Risk Working Group describes the LLM security risks relevant to system design. After exploring common authorization and security pitfalls, it outlines LLM design patterns for extending the capabilities of these systems. System designers can use this guidance to build systems that harness the powerful flexibility of AI while remaining secure.
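
To make the authorization theme concrete, below is a minimal sketch (not taken from the publication) of one widely recommended practice: checking the authenticated user's permissions outside the model before any LLM-proposed tool call or data access is executed. All names in the example (User, ToolRequest, PERMISSIONS, run_tool) are hypothetical and purely illustrative.

```python
# Minimal sketch: enforce the calling user's permissions outside the LLM,
# so the model can propose a tool call but never authorizes it itself.
# All names here are hypothetical, not drawn from the CSA document.

from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    roles: set[str]          # roles from the real identity provider


@dataclass
class ToolRequest:
    tool_name: str           # tool the LLM asked to invoke
    arguments: dict          # arguments proposed by the LLM


# Hypothetical policy: which roles may invoke which tools.
PERMISSIONS = {
    "search_public_docs": {"employee", "contractor"},
    "read_customer_record": {"support_agent"},
}


def authorize(user: User, request: ToolRequest) -> bool:
    """Deny by default; allow only if one of the user's roles is permitted."""
    allowed_roles = PERMISSIONS.get(request.tool_name, set())
    return bool(user.roles & allowed_roles)


def run_tool(request: ToolRequest) -> dict:
    # Placeholder for the actual tool or data-source integration.
    return {"tool": request.tool_name, "status": "ok"}


def execute_tool_call(user: User, request: ToolRequest) -> dict:
    # The authorization decision uses the authenticated user's identity,
    # never text generated by the model.
    if not authorize(user, request):
        raise PermissionError(
            f"{user.user_id} is not permitted to call {request.tool_name}"
        )
    return run_tool(request)


if __name__ == "__main__":
    caller = User(user_id="u123", roles={"employee"})
    # The LLM proposed this call; the surrounding system decides whether it runs.
    print(execute_tool_call(caller, ToolRequest("search_public_docs", {"q": "policy"})))
```

The design choice illustrated here is deny-by-default authorization enforced at the boundary between the LLM and external tools or data sources, keyed to the end user's identity rather than to anything the model says.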

Key Takeaways:
  • LLM security measures and best practices
  • Authorization considerations for the various components of LLM-backed systems
  • Security challenges and considerations for LLM-backed systems
  • Common architecture design patterns for LLM-backed systems
Download this Resource

Related resources
  • Data Security within AI Environments
  • Introductory Guidance to AICM
  • Capabilities-Based Risk Assessment (CBRA) for AI Systems
  • Security for AI Building, Not Security for AI Buildings (Published: 12/09/2025)
  • AI Explainability Scorecard (Published: 12/08/2025)
  • Why AI Won't Replace Us: The Critical Role of Human Oversight in AI-Driven Workflows (Published: 12/03/2025)
  • Navigating the Liminal Edge of AI Security: Deconstructing Prompt Injection, Model Poisoning, and Adversarial Perturbations in the Cognitive Cyber Domain (Published: 12/01/2025)