
Securing LLM Backed Systems: Essential Authorization Practices
Who it's for:
  • System engineers and architects
  • Privacy and security professionals

Release Date: 08/13/2024

Updated On: 07/16/2025

Organizations are increasingly leveraging Large Language Models (LLMs) to tackle diverse business problems. Both established companies and a crop of new startups are vying for first-mover advantages. With this mass adoption of LLM-backed systems comes a critical need for formal guidance on their secure design, especially when an LLM must make decisions or utilize external data sources.

This document by the CSA AI Technology and Risk Working Group describes the LLM security risks relevant to system design. After exploring the pitfalls in authorization and security, it also outlines LLM design patterns for extending the capabilities of these systems. System designers can use this guidance to build systems that utilize the powerful flexibility of AI while remaining secure.

Key Takeaways:
  • LLM security measures and best practices
  • Authorization considerations for the various components of LLM-backed systems
  • Security challenges and considerations for LLM-backed systems
  • Common architecture design patterns for LLM-backed systems
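One recurring theme in the takeaways above is that authorization must be enforced by the system around the LLM, not delegated to the model itself. As a minimal illustrative sketch (all names here are hypothetical, not taken from the publication), a retrieval step can filter documents against the requesting user's roles before any prompt is built, so the model never sees data the user is not entitled to:

```python
# Hypothetical sketch: per-user authorization on documents retrieved for an
# LLM prompt. The filter runs BEFORE prompt construction, so authorization
# is enforced by the surrounding system, never by the model.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)


def authorize_documents(docs, user_roles):
    """Return only the documents whose allowed roles overlap the user's."""
    roles = set(user_roles)
    return [d for d in docs if d.allowed_roles & roles]


def build_prompt(question, docs):
    """Assemble the prompt from already-authorized documents only."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


corpus = [
    Document("hr-1", "Salary bands for 2024...", {"hr"}),
    Document("pub-1", "Company holiday schedule...", {"hr", "employee"}),
]

# A user holding only the "employee" role sees just the public document.
visible = authorize_documents(corpus, ["employee"])
print([d.doc_id for d in visible])  # -> ['pub-1']
```

The key design choice this sketch illustrates is that the access-control decision is deterministic code outside the model's reasoning path; the LLM's output can then be treated as untrusted without risking data exposure.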
Related resources
  • Analyzing Log Data with AI Models
  • Agentic AI Identity and Access Management: A New Approach
  • Secure Agentic System Design: A Trait-Based Approach
  • Fortifying the Agentic Web: A Unified Zero Trust Architecture Against Logic-Layer Threats (Published: 09/12/2025)
  • The Hidden Security Threats Lurking in Your Machine Learning Pipeline (Published: 09/11/2025)
  • From Policy to Prediction: The Role of Explainable AI in Zero Trust Cloud Security (Published: 09/10/2025)
  • API Security in the AI Era (Published: 09/09/2025)
Acknowledgements

Michael Roza
Risk, Audit, Control and Compliance Professional at EVC

Michael Roza is a seasoned risk, audit, control and compliance, and cybersecurity professional with over 20 years of experience across multinational enterprises and startups. As a Cloud Security Alliance (CSA) Research member for over 10 years, he has led and contributed to more than 140 CSA projects spanning Zero Trust, AI, IoT, Top Threats, DevSecOps, Cloud Key Management, the Cloud Controls Matrix, and many others.

He has co-chaired...

