Defensive AI, Deepfakes, and the Rise of AGI: Cybersecurity Predictions and What to Expect in 2024
Published 01/04/2024
Originally published by Abnormal Security on November 30, 2023.
Written by Jade Hill.
There is no denying that AI has been the buzzword of 2023, as professionals and cybercriminals alike discovered how to use it to their advantage this year. And as we look into the new year, that is not likely to change. If anything, this technology will make cybercrime more sophisticated and easier to scale, creating an enormous need for AI in cybersecurity.
We asked our experts to make their predictions for 2024 and explain how they anticipate both AI and cybersecurity will change in the coming year. Here’s what they said.
The democratization of AI will continue to drastically lower the barrier to entry for threat actors to launch attacks.
The growing ubiquity of AI is creating dangerous ripple effects that more cybersecurity professionals will need to address as we head into 2024. The widespread accessibility of generative AI tools brings immense productivity benefits, but those same tools become dangerous in the hands of bad actors, effectively lowering the barrier to entry for launching attacks.
Even inexperienced and unskilled threat actors can now write phishing and business email compromise emails more quickly and convincingly, scaling their attacks in both volume and sophistication. We started to see this trend in 2023, but I expect exponential growth in 2024 as attackers crack the code on how to create and send attacks at lightning speed.
—Mike Britton, CISO
Greater volumes of AI-generated threats, including deepfakes, will create a surge in demand for validation techniques.
In the year ahead, bad actors will continue learning how to tap into generative AI tools to create high-quality manipulations and sophisticated scams. We’ll also likely see an increase in the variety of delivery techniques for these threats. For instance, in addition to AI-generated phishing emails, we’ll see a continued rise in vishing (voice phishing), smishing (SMS phishing), and quishing (QR code phishing) attacks, as well as deepfakes on audio and video calls.
As deepfakes become more pervasive, I expect consumers will become increasingly discerning about the content they encounter on the Internet and will more proactively seek validation of its authenticity. As a result, we may see greater interest in and demand for validation techniques, including crowdsourced validation. Community Notes on Twitter/X is an excellent example of the kind of tool I expect to see more of, letting people crowdsource social and identity proof for media shared on the platform.
—Dan Shiebler, Head of Machine Learning
The future of AI is nearly here, with AGI expected within the next five years.
Artificial general intelligence, or AGI, has been a consistent storyline across science fiction, and it is becoming a reality faster than expected. Systems that could learn to accomplish any intellectual task that human beings or animals can perform are no longer 20+ years away. There is no way to put the genie back in the bottle with AI now.
Just as people were surprised this year by how powerful ChatGPT is, they will be surprised again next year as the technology becomes exponentially more transformational, given the slope of improvements already underway. As a result, the AI singularity will happen very fast, changing the world as we know it in both good ways and bad. Organizations need to start preparing now to harness its enormous potential, while also ensuring they have a strategy in place to defend against growing AI-generated threats.
—Evan Reiser, CEO and Co-Founder
Security teams will increasingly seek AI-driven tools to protect their organizations but will need to discern which tools are AI-native and which have AI bolted on.
The cybersecurity talent shortage we’ve seen over the past few years will persist into 2024. As a result, we’ll continue to see organizations bolster their lean teams with solutions that promote efficiency, are easy to maintain, and help lower overhead, turning most of them toward tooling that leverages AI.
However, the rapid AI boom of the past year, and the surging demand it has created, means that many vendors will react (or already have reacted) by bolting AI onto their existing solutions rather than developing new AI-native ones. In 2024, I expect we will see more companies (and thus more salespeople) claim to offer AI capabilities within their products, even when that is not entirely true.
Moving forward, buyers should be discerning about these technologies’ true capabilities in the same way they were when companies claimed that they had moved their products entirely to the cloud.
—Mick Leach, Field CISO
Social engineering attacks will remain responsible for billions in losses, with federal grant funding becoming an increasingly attractive target.
Business email compromise (BEC) has long been the preferred method of attack, with $2.7 billion in losses in 2022 alone. This is unlikely to change in the coming years, but attackers may shift their focus to a specific and highly attractive target: federal grant funding.
For organizations that receive a portion of the trillion dollars in federal grant money, the award is an inspiring and newsworthy event. But because these are federal funds, information about them is publicly available online. Government agencies may publicize that they received funding, or details could be exposed through public board meetings or grant funding requests. With this information, cybercriminals can easily identify high-value targets and gather what they need to launch highly successful targeted attacks.
And while no one wants to become the next victim of BEC, these attacks will be particularly devastating when you consider that this money is often meant to fund initiatives such as affordable housing programs, substance use prevention and treatment programs, and STEM and workforce development programs.
—Zach Oxman, Head of SLED Development
Preparing for the Threats (and Opportunities) of 2024
Whether or not all of these predictions come to fruition, one trend never goes out of style: organizations that take a proactive stance are better equipped to respond to emerging threats. And with AI changing the game at rapid speed, security leaders must start preparing now for a greater volume of more sophisticated attacks than ever before.
There is no sign of a slowdown, and all signs point to one conclusion: to fight AI, we must use AI. Now is the time to discern what that actually means for your organization and take the necessary steps to ensure you’re prepared for what comes next.