Other Practices Are Placing Greater Trust in AI... When Will Cybersecurity?
Published 02/22/2024
Originally published by Dazz.
Written by Noah Simon, Head of Product Marketing, Dazz.
In 2023, we saw AI adoption rates soar—particularly for large language models (LLMs). Many industries are now incorporating AI into common processes and are seeing positive results—and not just cost savings from automating repeatable tasks that humans would otherwise perform. While there are still concerns about using AI to replace humans for critical decision-making, that barrier is starting to come down as well. AI is starting to surpass human accuracy in some areas.
AI in Healthcare
Take medical diagnoses. Google recently published research claiming its AI was more effective at diagnosing respiratory and cardiovascular conditions than board-certified primary-care physicians. Note: the research has not yet been peer-reviewed or applied to real patients with real symptoms.
Still, it is a prime example of AI's potential to make critical decisions with a high degree of accuracy.
Autonomous Driving
Autonomous driving technology, in which AI plays a key role, is also making significant progress. A recent study of Waymo's self-driving technology by the insurance firm Swiss Re found that Waymo's fully autonomous vehicles reduced the frequency of property damage claims by 76% compared with human drivers. Furthermore, they produced zero bodily injury claims, a drastic contrast to Swiss Re's human-driver baseline of 1.11 claims per million miles. The sample size was significant: 3.8 million fully autonomous miles.
In other words, self-driving cars are quickly becoming safer than human drivers.
Can AI be used for more than just detection in cybersecurity?
There are advancing uses of AI in cybersecurity today, with the most common application being enhanced detection. For instance, AI is incredibly powerful at detecting attacks that haven't been seen before. The NSA recently announced it is using AI to detect "living off the land" attacks, in which attackers compromise infrastructure without deploying malware.
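To make the detection idea concrete: living-off-the-land activity often shows up as a legitimate binary being launched in an unusual way. The sketch below is purely illustrative (it is not the NSA's system, and the process names and baseline data are invented); it scores parent/child process pairs by how rarely they appeared in benign telemetry, so novel combinations stand out without any malware signature.

```python
from collections import Counter

# Baseline telemetry: (parent process, child process) pairs observed
# during normal operation. Invented data for illustration only.
baseline = [
    ("explorer.exe", "chrome.exe"),
    ("explorer.exe", "outlook.exe"),
    ("services.exe", "svchost.exe"),
] * 100 + [("explorer.exe", "notepad.exe")] * 5

pair_counts = Counter(baseline)
total = sum(pair_counts.values())

def anomaly_score(parent: str, child: str) -> float:
    """Rarity score in [0, 1]: 1.0 means the pair never appeared in the baseline."""
    return 1.0 - pair_counts.get((parent, child), 0) / total

# A word processor spawning a shell is a classic living-off-the-land pattern:
# no malware is dropped, but the pair is unseen in benign telemetry.
print(anomaly_score("winword.exe", "powershell.exe"))  # unseen pair -> 1.0
print(anomaly_score("services.exe", "svchost.exe"))    # common pair -> low score
```

Real systems learn far richer features (command lines, timing, user context) with statistical or ML models, but the principle is the same: flag behavior that deviates from an established baseline rather than matching known-bad signatures.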
While enhanced detection is no doubt a benefit, these detections are still being responded to and resolved by humans.
AI can drive more meaningful impact by automating remediation
As with autonomous driving technology, trust in using AI for security remediation needs to be built up over time. However, AI is already being used to streamline remediation by generating fixes when security vulnerabilities are found in code. These fixes can be applied in CI/CD pipelines, where the patched code is tested and deployed only if it passes.
Today, these fixes still require human oversight from experts who understand the full context and nuances of how the code fits into the overall application, and how that application is deployed.
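The gating step described above can be sketched in a few lines. This is a minimal, hypothetical helper (not any vendor's actual API, and it assumes a single-file repo for simplicity): apply the proposed AI-generated patch, run the test suite, and roll back automatically if anything fails, so only green fixes ever reach review or deployment.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def gate_ai_fix(repo_dir: str, patch: str, test_cmd: list[str]) -> bool:
    """Apply a proposed AI-generated patch, run the tests, revert on failure.

    Hypothetical sketch: assumes the whole "repo" is a single app.py file.
    """
    target = Path(repo_dir) / "app.py"
    original = target.read_text()
    target.write_text(patch)                    # apply the candidate fix
    result = subprocess.run(test_cmd, cwd=repo_dir, capture_output=True)
    if result.returncode != 0:
        target.write_text(original)             # roll back: fix failed the tests
        return False
    return True                                 # fix kept; safe to open a PR

# Usage: a toy "vulnerable" function and a test that the patch must satisfy.
with tempfile.TemporaryDirectory() as repo:
    (Path(repo) / "app.py").write_text("def safe():\n    return False\n")
    ai_patch = "def safe():\n    return True\n"
    cmd = [sys.executable, "-c", "import app; assert app.safe()"]
    print(gate_ai_fix(repo, ai_patch, cmd))     # patch passes tests -> True
```

The design choice worth noting is that the pipeline, not the model, is the trust boundary: the AI only proposes a change, and the existing test suite decides whether it lands.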
However, as we're seeing with healthcare and autonomous driving, the time is coming when AI-generated security fixes will be more reliable than human-generated ones. The only question is: how soon?