Securing LLM Backed Systems: Essential Authorization Practices
Release Date: 08/13/2024
Who it's for:
- System engineers and architects
- Privacy and security professionals
Organizations are increasingly leveraging Large Language Models (LLMs) to tackle diverse business problems, with both established companies and a crop of new startups vying for first-mover advantage. This mass adoption of LLM-backed systems creates a critical need for formal guidance on their secure design, especially when an LLM must make decisions or utilize external data sources.
This document by the CSA AI Technology and Risk Working Group describes the LLM security risks relevant to system design. After exploring authorization and security pitfalls, it outlines design patterns for extending the capabilities of LLM-backed systems. System designers can use this guidance to build systems that harness the powerful flexibility of AI while remaining secure.
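To illustrate the kind of authorization pitfall the publication addresses, consider the case it highlights: an LLM invoking tools against external data sources. A minimal sketch (all names and permissions below are hypothetical, not taken from the publication) of one common mitigation: enforce authorization in deterministic orchestration code, scoped to the end user's identity, rather than trusting the model to police itself.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only; the publication defines the actual
# patterns and terminology.

@dataclass
class Caller:
    """The end user on whose behalf the LLM is acting."""
    user_id: str
    permissions: set[str] = field(default_factory=set)

def fetch_customer_record(caller: Caller, customer_id: str) -> str:
    """Hypothetical external data source, gated by the caller's
    permissions -- not by anything the model claims about itself."""
    if "customers:read" not in caller.permissions:
        raise PermissionError(f"{caller.user_id} may not read customer records")
    return f"record for {customer_id}"  # placeholder for a real lookup

# Registry of tools the LLM may request; authorization is checked here,
# in deterministic code, before any tool runs.
TOOLS = {"fetch_customer_record": fetch_customer_record}

def dispatch(caller: Caller, tool_name: str, **kwargs) -> str:
    """Run an LLM-requested tool call with the caller's identity, so the
    model cannot escalate beyond what the human user is allowed to do."""
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](caller, **kwargs)

# Usage: the model asks for a tool call; the orchestrator, not the model,
# decides whether this caller is authorized to make it.
alice = Caller("alice", permissions={"customers:read"})
print(dispatch(alice, "fetch_customer_record", customer_id="C-42"))
```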
Key Takeaways:
- LLM security measures and best practices
- Authorization considerations for the various components of LLM-backed systems
- Security challenges and considerations for LLM-backed systems
- Common architecture design patterns for LLM-backed systems