AI is Now Exploiting Known Vulnerabilities - And What You Can Do About It
Published 06/26/2024
Originally published by Dazz.
In a recent study from the University of Illinois Urbana-Champaign (UIUC), researchers demonstrated that large language models (LLMs) can exploit vulnerabilities simply by reading threat advisories. While some argue that the sample size was small (15 known vulnerabilities), the study still carries important implications for vulnerability management and remediation programs: the stakes have clearly been raised.
Understanding the Study
The study evaluated GPT-4's comprehension and application of information contained in threat advisories. Researchers tasked GPT-4 with identifying and exploiting vulnerabilities based solely on the content of these advisories, without any additional context or guidance. Remarkably, the model successfully exploited 87% of the vulnerabilities tested, demonstrating a sophisticated understanding of security concepts and the ability to apply them in practice.
Other precedents exist
Of course, this is not the first example or proof of concept of attackers using AI. LLMs have already been used to carry out phishing attacks at far greater scale and to hack websites. These examples show that attackers can repurpose LLMs quickly - an LLM can become an attacker with only a few prompts.
What AI means for Vulnerability Management and Remediation
AI is moving at a rapid pace. Many ideas fueled by LLMs move from proof of concept to reality in a matter of days.
Unpatched vulnerabilities have long been known to cause many breaches. A 2022 study found that 60% of the breaches analyzed resulted from the exploitation of a known, unpatched vulnerability. Moreover, other studies show that breaches stemming from known vulnerabilities can be more costly than other attacks, such as phishing.
Fueled by LLMs, vulnerability exploitation has just become faster and cheaper.
So what lessons should security leaders take away from this study?
- Vulnerability disclosure could change (for better or worse): given this development, how should the industry approach vulnerability disclosure? This will be another important dimension of the ongoing debate over how vulnerabilities should be disclosed. Many in the community may decide to slow disclosure, which could have unintended consequences.
- Reducing remediation time is more important than ever. The mean time to remediate (MTTR) vulnerabilities at many companies is still too long and often exceeds the service level agreements (SLAs) defined by company stakeholders. With LLMs able to exploit known vulnerabilities from the contents of their disclosures alone, teams need to quickly understand the impact of any new vulnerability and, for vulnerabilities that pose critical risks, formulate a remediation campaign in hours, not days (a minimal sketch of measuring MTTR against SLA targets follows this list).
- Security teams need to fight AI with AI: our CEO Merav Bahat recently told an audience at Fortune London that security teams need to turn AI from a threat into an opportunity. While LLMs can now be used for vulnerability exploitation, they can also make vulnerability prioritization and remediation faster and more effective than ever before.
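To make the MTTR point above concrete, here is a minimal sketch of how a team might measure mean time to remediate against severity-based SLA targets. The findings, field names, and SLA thresholds below are hypothetical illustrations, not values from the UIUC study or from any particular vendor's product.

```python
# A minimal sketch of tracking mean time to remediate (MTTR) against SLA
# targets. All data, field names, and SLA thresholds here are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical SLA targets (days to remediate) per severity.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

@dataclass
class Finding:
    cve_id: str
    severity: str
    disclosed: datetime   # when the advisory was published
    remediated: datetime  # when the fix was deployed

# Hypothetical remediation records.
findings = [
    Finding("CVE-2024-0001", "critical", datetime(2024, 4, 1), datetime(2024, 4, 12)),
    Finding("CVE-2024-0002", "high",     datetime(2024, 4, 3), datetime(2024, 4, 20)),
    Finding("CVE-2024-0003", "critical", datetime(2024, 4, 5), datetime(2024, 4, 9)),
]

# Group remediation times by severity and compare each MTTR to its SLA target.
for severity, sla in SLA_DAYS.items():
    days = [(f.remediated - f.disclosed).days
            for f in findings if f.severity == severity]
    if not days:
        continue
    mttr = mean(days)
    status = "within SLA" if mttr <= sla else f"EXCEEDS SLA by {mttr - sla:.1f} days"
    print(f"{severity}: MTTR {mttr:.1f} days vs {sla}-day SLA -> {status}")
```

Tracking this metric continuously, rather than per audit cycle, is what makes an "hours, not days" remediation campaign measurable in the first place.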