Cloud 101CircleEventsBlog
Register for CSA's Virtual AI Summit to discover AI's business impact, tackle security challenges, and ensure compliance with evolving regulations.

The Rise of Malicious AI: 5 Key Insights from an Ethical Hacker

Published 01/03/2025

The Rise of Malicious AI: 5 Key Insights from an Ethical Hacker

Originally published by Abnormal Security.

Written by Jade Hill.


Artificial intelligence has become prevalent in nearly every industry worldwide over the last two years, and cybercrime is no exception. While the cybersecurity industry is focused on how to use AI to stop bad actors, those cybercriminals were trying to defend against are innovating even faster—often using AI to supercharge their attacks and make them more sophisticated than ever before.

To understand more about how attackers are using this innovative technology, we worked with an ethical hacker. FreakyClown, known today as FC, provided us with some key insights in his most recent white paper, titled The Rise, Use, and Future of Malicious Al: A Hacker's Insight. Here are some of the lessons from his deep dive into this new world of cybercrime.


1. Malicious AI Technology is Readily Available

The increasing accessibility of AI technologies and frameworks combined with the knowledge transfer has led to an explosion of malicious tools and AI models. While commercial AI tools available to the public (like ChatGPT and CoPilot) have built-in safety systems and controls in place, cybercriminals are now creating their own versions, such as FraudGPT, PoisonGPT, VirusGPT and EvilGPT—each name inspired by their niche intended use.

As the dark web becomes flooded with new malicious tools and open-source AI models are being de-censored, criminals can utilize them for malicious activities. The backbone behind any AI model is the dataset, and whilst commercial ones do not allow you to ingest your own data into them, the criminalized versions do. This makes them not only more capable of creating attacks, but also in defining their target data.


2. AI-Enhanced Malware is Here

In 1980, the first malware that replicated itself and moved through a network, the Morris Worm, was released. Earlier this year, we saw the Morris Worm 2.0, which targets generative AI systems. Like traditional worms, it steals data and deploys malware, but it does so by manipulating the AI prompts to bypass security measures and replicating itself across different platforms. It was created by researchers as a proof of concept, but the creation of this tool is an example of the kind of advanced malware research and development that we are likely to see—both by criminals and legitimate security researchers. Within the next few years malware will have advanced techniques built in that will allow it to recognize the system it is in and morph itself to defend against, or even avoid, current detection systems.


3. Deepfakes are Deeply Troubling

In addition to malware, the introduction of generative AI tools has led to a substantial rise of impersonation attacks. These tools have enabled criminals to use digital twins and face-swapping technologies, adding far more sophistication to their more traditional scamming techniques. In February 2024, a finance worker in Hong Kong was tricked into paying $26 million to fraudsters posing as the multinational firm’s chief financial officer, and these tools are being used today to further political agendas.


4. AI is Leading to Increased Cybercrime-as-a-Service

With the rise of AI-enhanced hacking tools, it has become even easier for lesser-skilled criminals to start dabbling into the hacking side themselves. As with all tools, AI systems developed legally and legitimately can, unfortunately, be subverted by criminals for malicious purposes. Take, for example, the AI-powered tool Nebula by BerylliumSec, which is effectively an assistant for hackers, who can interact with the computer using natural language—making it possible for hackers to use it to do the heavy lifting of commands and execution to target vulnerable people and organizations.


5. AI is Needed to Stop AI

The rise of the malicious use of AI presents significant challenges to cybersecurity. Understanding the methods, impacts, and defense mechanisms is crucial for mitigating these threats. And so to protect themselves and their employees, organizations must proactively adopt AI-driven defense measures, collaborate on threat intelligence, and continuously educate their workforce to stay ahead of the malicious use of AI. It must be remembered that whilst the rise of AI is a force multiplier for threat actors, it is also a force multiplier for those defending against those attacks. Together, with the right tools, we can stay safe from these attacks.


Moving Into an AI-Powered Future

As AI becomes more integrated into every tool we use, the lines between traditional and AI-driven cyber attacks will blur. Regardless, the takeaway is clear: AI is both the problem and the solution. The key will be staying ahead of the curve, continuously adapting defense mechanisms, and fostering collaboration across both industries and borders.

Share this content on your favorite social network today!