Securing LLM Backed Systems: Essential Authorization Practices
Who it's for:
  • System engineers and architects
  • Privacy and security professionals

Release Date: 08/13/2024

Organizations are increasingly leveraging Large Language Models (LLMs) to tackle diverse business problems. Both established companies and a crop of new startups are vying for first-mover advantage. With this mass adoption of LLM-backed systems comes a critical need for formal guidance on their secure design, especially when an LLM must make decisions or draw on external data sources.

This document by the CSA AI Technology and Risk Working Group describes the LLM security risks relevant to system design. After exploring common authorization and security pitfalls, it outlines design patterns for safely extending the capabilities of LLM-backed systems. System designers can use this guidance to build systems that harness the flexibility of AI while remaining secure.
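
To make the authorization theme concrete, below is a minimal illustrative sketch (not taken from the publication): the application code, rather than the LLM, decides whether a model-requested action is permitted for the current end user. All names here (ToolCall, PERMISSION_REQUIRED, get_user_permissions, execute_tool_call) are hypothetical placeholders for whatever tool registry and identity lookup a real system would use.

from dataclasses import dataclass

@dataclass
class ToolCall:
    """A tool invocation proposed by the LLM."""
    name: str        # e.g. "send_email"
    arguments: dict  # arguments suggested by the model

# Deterministic mapping of tools to the permission each one requires.
PERMISSION_REQUIRED = {
    "query_hr_records": "hr.read",
    "send_email": "mail.send",
}

def get_user_permissions(user_id: str) -> set:
    """Placeholder for a real identity or policy-engine lookup."""
    return {"mail.send"}

def execute_tool_call(user_id: str, call: ToolCall) -> None:
    """Run a model-requested tool only if the end user is authorized for it."""
    required = PERMISSION_REQUIRED.get(call.name)
    if required is None or required not in get_user_permissions(user_id):
        # Deny by default; the decision is made by deterministic code, not by the LLM.
        raise PermissionError(f"user {user_id} may not invoke {call.name}")
    ...  # dispatch to the real tool implementation here

The key design choice in this sketch is that the permission check sits outside the model: even if a prompt injection convinces the LLM to request a privileged tool such as query_hr_records, the call is rejected because the end user's own permissions do not cover it.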

Key Takeaways:
  • LLM security measures and best practices
  • Authorization considerations for the various components of LLM-backed systems
  • Security challenges and considerations for LLM-backed systems
  • Common architecture design patterns for LLM-backed systems

Related resources
  • AI Risk Management: Thinking Beyond Regulatory Boundaries
  • AI Organizational Responsibilities - Governance, Risk Management, Compliance and Cultural Aspects
  • AI in Medical Research: Applications & Considerations
  • The European Union Artificial Intelligence (AI) Act: Managing Security and Compliance Risk at the Technological Frontier (Published: 12/10/2024)
  • From AI Agents to MultiAgent Systems: A Capability Framework (Published: 12/09/2024)
  • AI-Enhanced Penetration Testing: Redefining Red Team Operations (Published: 12/06/2024)
  • AI in Cybersecurity - The Double-Edged Sword (Published: 11/27/2024)

Acknowledgements

Michael Roza
Risk, Audit, Control and Compliance Professional at EVC

Since 2012, Michael Roza has been a pivotal member of the Cloud Security Alliance (CSA) family. He has contributed to over 125 projects as a Lead Author or Author/Contributor, and to many more as a Reviewer/Editor.

Michael's extensive contributions encompass critical areas including Artificial Intelligence, Zero Trust/Software Defined Perimeter, Internet of Things, Top Threats, Cloud Control Matrix, DevSecOps, and Key Management. His lea...

