Cloud 101CircleEventsBlog
Master CSA’s Security, Trust, Assurance, and Risk program—download the STAR Prep Kit for essential tools to enhance your assurance!

To Secure the AI Attack Surface, Start with Fundamental Cyber Hygiene

Published 10/10/2024

To Secure the AI Attack Surface, Start with Fundamental Cyber Hygiene

Originally published by Tenable.

Written by Lucas Tamagna-Darr.


Confusion and unknowns abound regarding the risks of AI applications. Many vendors are offering solutions to AI application security problems that aren't clearly defined. Here we explain that to boost AI application security and to protect against untrusted, third-party AI-enabled applications, you should start by focusing on basic, foundational cyber hygiene, adopt well-established best practices and enforce common-sense AI usage policies.


Introduction

The excitement around large language models (LLMs) and AI in general has triggered an inevitable rush to build the next great “powered by LLM” applications. As often happens, many questions have followed about the related risks and about how to secure these applications. Some aspects of securing LLM-powered applications are novel, involving serious complexities that must be carefully considered. Meanwhile, other aspects are similar to the risks that traditional vulnerability management programs have traditionally dealt with.

OWASP has ranked the top 10 risks for LLM applications:

  • Prompt Injection
  • Insecure Output Handling
  • Training Data Poisoning
  • Model DoS
  • Supply Chain Vulnerabilities
  • Sensitive Information Disclosure
  • Insecure Plugin Design
  • Excessive Agency
  • Overreliance
  • Model Theft


Novel challenges

Risks like prompt injection, sensitive-information disclosure and training-data poisoning are not necessarily new. However, extensive research is being conducted on the practicality of exploitation and the challenges of securing against those risks. For example, there have been prompt-injection attacks on language models where an attacker tricks the model into ignoring its instructions and disclosing information that it shouldn’t have disclosed. These types of attacks are technically achievable. However, because of the nature of LLMs, it’s not clear how practical these attacks are in the real world. Unlike a SQL injection attack where you can develop some degree of control of the data you get back, an LLM prompt-injection attack will always be subject to the behavior of the underlying model.

Securing an LLM application against prompt injection also involves unique challenges. With traditional command injection / SQL injection attacks, you have a lot of control over the input and can significantly constrain what a user inputs, as well as filter any unwanted input in pre-processing. LLMs allow more open-ended questions and responses, and potentially follow-up questions, so any pre-filtering or processing risks reduce the functionality of the feature. This inevitably becomes a balancing act between security and functionality. We’re likely to see improved attacks against prompts as well as better prompt-security tools .

Attackers and defenders face similar challenges in output handling and training-data poisoning. These risks should not be ignored and teams should be as defensive as possible, given the level of uncertainty and the complexity of the challenges. However, you can easily get bogged down addressing these risks while overlooking equally dangerous vulnerabilities that are also simpler to exploit and more familiar to defenders.


What to do - start with the basics

Two core risks associated with AI and LLM applications are supply-chain vulnerabilities and privacy violations. Both should be top of mind for applications developed in-house and for third-party applications. Fortunately these risks are better understood and can be easily incorporated into existing vulnerability management and cloud security programs.

Many of the major libraries that support AI and LLM applications have at least some high and critical severity vulnerabilities that have been disclosed. When developing applications, teams must have visibility into the libraries being used as well as into the vulnerabilities in those libraries. Vulnerability databases such as https://avidml.org/database/ as well as bug bounty programs such as https://huntr.com/ focus on AI-specific vulnerabilities. These can be valuable resources for teams that want to focus on mitigating any vulnerabilities in those libraries.

You can mitigate privacy risks by establishing and enforcing strong policies on the language models that engineering teams are permitted to use when building in-house applications. This is particularly important for any language models that are hosted by a third party. Additionally, you must have an inventory of all applications that employees are using which have LLM-powered features, as these can become data leakage channels. A search for the term “AI” in the Firefox browser-extension catalog returns over 2,000 results , while a search for the term “LLM” returns over 300 results. Many employees install these extensions without fully understanding their security and privacy implications. Employees also often aren’t aware that data they input may end up being passed to a third party or added to an LLM training dataset. Organizations should establish approved applications and browser extensions, forbidding employees from using any others that aren’t on the approved list.


Conclusion

LLMs and AI in general have left many security teams with unanswered questions about cyber risk management. We will inevitably see a rush of new products and features aimed at securing models and prompts, and at blocking malicious activity. However, you may want to see where you can consolidate tools and bolster existing strategies with AI-aware security solutions. Why? There’s still too much uncertainty around the reliability and practicality of attacks on LLMs as well as around the ability of tools to block those attacks without negating the value of LLMs. In the meantime, security teams can take concrete, well-understood and well-defined steps to reduce their risk and secure the attack surface. They can also establish strong corporate policies on the usage of LLMs and AI-powered applications, and conduct monitoring to ensure those policies are followed.



About the Author

In his role as a senior director of engineering and research solutions architect, Lucas Tamagna-Darr leads the automation and engineering functions of Tenable Research. Luke started out at Tenable developing plugins for Nessus and Nessus Network Monitor. He subsequently went on to lead several different functions within Tenable Research and now leverages his experience to help surface better content for customers across Tenable's products.

Share this content on your favorite social network today!