Securing LLM Backed Systems: Essential Authorization Practices
Who it's for:
  • System engineers and architects
  • Privacy and security professionals

Release Date: 08/13/2024

Organizations are increasingly leveraging Large Language Models (LLMs) to tackle diverse business problems. Both established companies and a crop of new startups are vying for first-mover advantage. With this mass adoption of LLM-backed systems comes a critical need for formal guidance on their secure design, particularly when an LLM must make decisions or draw on external data sources.

This document by the CSA AI Technology and Risk Working Group describes the LLM security risks relevant to system design. After exploring common pitfalls in authorization and security, it outlines design patterns for safely extending the capabilities of LLM-backed systems. System designers can use this guidance to build systems that harness the powerful flexibility of AI while remaining secure.
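
To make the core idea concrete, here is a minimal, hypothetical sketch of one practice commonly associated with this topic: enforcing authorization in deterministic application code rather than trusting the model to police itself. All names (User, TOOL_PERMISSIONS, invoke_tool) and the role model are illustrative assumptions, not drawn from the publication.

```python
# Hypothetical sketch: gate every LLM-requested tool call with the end
# user's permissions, checked in deterministic code outside the model.
# All names and roles here are illustrative, not from the CSA document.
from dataclasses import dataclass


@dataclass(frozen=True)
class User:
    user_id: str
    roles: frozenset[str]


# Illustrative mapping from tools the LLM may request to the role a user
# must hold before the system will execute that tool on their behalf.
TOOL_PERMISSIONS = {
    "read_customer_record": "support_agent",
    "issue_refund": "finance_admin",
}


def invoke_tool(user: User, tool_name: str, arguments: dict) -> str:
    """Authorize against the requesting user's roles, never against the
    broad permissions of the LLM's own service account."""
    required_role = TOOL_PERMISSIONS.get(tool_name)
    if required_role is None:
        # Deny by default: model output is untrusted input.
        raise PermissionError(f"unknown tool: {tool_name}")
    if required_role not in user.roles:
        raise PermissionError(
            f"user {user.user_id} lacks role '{required_role}' for {tool_name}"
        )
    return dispatch(tool_name, arguments)


def dispatch(tool_name: str, arguments: dict) -> str:
    # Stand-in for the real tool implementations.
    return f"executed {tool_name} with {arguments}"


if __name__ == "__main__":
    agent = User("u-123", frozenset({"support_agent"}))
    print(invoke_tool(agent, "read_customer_record", {"customer_id": "c-9"}))
    # invoke_tool(agent, "issue_refund", {...}) would raise PermissionError,
    # regardless of what the model asked for.
```

The design point: because the check runs in ordinary code with the user's identity, a prompt-injected or hallucinated tool request cannot escalate privileges beyond what the caller already holds.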

Key Takeaways:
  • LLM security measures and best practices
  • Authorization considerations for the various components of LLM-backed systems
  • Security challenges and considerations for LLM-backed systems
  • Common architecture design patterns for LLM-backed systems
Related resources
  • Using AI for Offensive Security
  • AI Model Risk Management Framework
  • CSA Large Language Model (LLM) Threats Taxonomy
  • A Step-by-Step Guide to Improving Large Language Model Security (published 09/10/2024)
  • AI Regulations on the Horizon: Transforming Corporate Governance and Cybersecurity (published 09/10/2024)
  • Pioneering Transparency: Oklahoma’s Proposed Artificial Intelligence Bill of Rights (published 09/06/2024)
  • Mechanistic Interpretability 101 (published 09/05/2024)