
3 Cybersecurity Threats Caused by Generative AI

Published 11/01/2023

Originally published by Abnormal Security.

Written by Jade Hill.

New technologies invite a spectrum of reactions. On the extreme ends are the people who, perhaps naively, think that novel tech will solve all humanity's problems or lead us to our collective doom. But reality is always more nuanced.

Take generative AI, for instance. Tools like ChatGPT and Bard are used by individuals, businesses, and even bad actors to quickly create content. While some of these use cases are encouraging, there's real anxiety about generative AI. In fact, 53% of IT professionals believe ChatGPT will be used this year to help hackers craft more believable and legitimate-sounding phishing emails. And the emergence of WormGPT, a similar tool marketed to cybercriminals, has made that concern even more concrete.

Unfortunately, email attacks powered by generative AI are already happening. And it's easy to understand why. Generative AI tools let cybercriminals automate and scale their attacks, turning a task that once took five minutes into one that takes five seconds. Making matters worse, ChatGPT and similar tools make it nearly impossible to differentiate between real emails and malicious ones, because they eliminate the telltale signs of an attack, like grammar mistakes and spelling errors.

As generative AI advances, it's crucial to recognize the risks posed by its malicious use and how robust cybersecurity can make a difference. Here are a few ways we're already seeing generative AI used in cyberattacks.


1. Credential Phishing

Phishing attacks come in all shapes and sizes, from generic, mass-blast emails to more focused spear phishing scams. The objective is to steal sign-in credentials and sensitive information. Unfortunately, generative AI makes phishing and similar email-based attacks much worse. Here’s how:

  • Ease of access: Tools like ChatGPT (and the malicious WormGPT) are free and simple to use. With a couple of prompts, these tools craft highly convincing and error-free scams.
  • Unlimited volume: Attackers can craft messages rapidly. And since many cyberattacks are a numbers game, generative AI gives attackers more opportunities to succeed.
  • Increased sophistication: ChatGPT upends how employees spot common red flags like typos, grammatical errors, and inappropriate tone. Additionally, ChatGPT handles multiple languages, including Spanish, Russian, Arabic, German, and Japanese, making it possible to create an attack in one language and send it out to employees in multiple countries.

All in all, generative AI enhances the realism and scale of phishing emails and corresponding landing pages. This increases the chances of tricking users into revealing sensitive information.


2. Business Email Compromise and Social Engineering

Business email compromise (BEC) is already the most financially devastating cybercrime for businesses worldwide, resulting in more than $51 billion in losses since 2013. That's the bad news. The worse news is that generative AI will only make BEC more effective and harder to detect.

Generative AI doesn't just spit out generic email copy. With the right prompts and data, it can produce highly convincing, personalized text. For example, a hacker can feed in specific information about their target, such as conversation history, to mimic the tone of a coworker or executive. This makes malicious messages more engaging and realistic, helping the attacker build trust with the target. And by compromising the right account, a threat actor can harvest invoice details and conversation history for dozens (or even hundreds) of companies, then craft extremely realistic BEC emails that convince targets to pay a fake invoice or redirect billing to a bank account the attacker controls.


3. Malware Creation and Endpoint Exploitation

Coding is a skill set that not everyone has. One of the promising aspects of generative AI is its ability to produce working code from simple prompts, no expertise required. This is a boon for noncoders everywhere. Regrettably, it's also a resource for the bad guys, which is why some cybersecurity experts warn that generative AI will empower the next generation of script kiddies.

Generative AI can produce new malware variants, making it tough for traditional email security platforms to detect and block malicious software hidden in emails. This includes the possibility of self-mutating, or polymorphic, malware that evolves its code or behavior, allowing it to evade detection and persist once it strikes its target.
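To see why mutation defeats traditional scanning, consider a minimal sketch (illustrative only, not from the original article): signature-based detection typically compares a file's hash against a blocklist of known-bad fingerprints, and even a one-byte change to a payload produces a completely different hash.

```python
# Illustrative sketch: why static, hash-based signature scanning
# struggles against polymorphic malware. A one-byte mutation yields
# a completely different SHA-256 digest, so a blocklist of known-bad
# hashes no longer matches.
import hashlib

original_payload = b"...stand-in payload bytes..."  # placeholder, not real malware
mutated_payload = original_payload + b"\x00"        # trivial one-byte mutation

# A traditional blocklist of known-bad file hashes.
known_bad_hashes = {hashlib.sha256(original_payload).hexdigest()}

for name, payload in [("original", original_payload), ("mutated", mutated_payload)]:
    digest = hashlib.sha256(payload).hexdigest()
    verdict = "blocked" if digest in known_bad_hashes else "missed"
    print(f"{name}: {digest[:16]}... -> {verdict}")
```

This is why the behavior-based detection discussed later matters: a polymorphic payload's code changes with every variant, but its intent and behavior often don't.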

Similarly, cyberattackers can use generative AI to find and exploit endpoint vulnerabilities. If a hacker discovers a software vulnerability, they can use generative AI to create commands for automated attack payloads. And once the attacker has infiltrated an endpoint, they can easily move to email and connected systems to steal data and other confidential information. With this in mind, it’s easy to see how generative AI could accelerate the security arms race between developers working to patch issues and attackers hoping to exploit them.


Stopping Email Attacks Generated by AI

To counter these high-volume, highly sophisticated email attacks, organizations must deploy a sophisticated email security platform: one that stops malicious emails before they hit your mailboxes. If AI-generated attack emails are engineered to trick employees, then a proactive, AI-powered defense is the right answer to this evolving risk.

A next-generation platform built on good AI to fight bad AI should include:

  • Behavioral Data Science Approach: Go beyond rules-based security with behavioral data science and AI to profile and baseline good behavior and detect anomalies (a minimal sketch of this idea follows this list). By using identity modeling, behavioral and relationship graphs, and deep content analysis, a next-generation email security platform can identify and stop suspicious emails—whether they’re created by AI or a human.
  • API Architecture and Integrations: Since Microsoft 365 and Google Workspace are popular cloud-based workplace applications, you’ll want an API that provides access to the signals and data necessary for detecting suspicious activity. This includes unusual geolocations, dangerous IP addresses, changes in mail filter rules, unusual device logins, and more. More advanced solutions can also connect to other applications, including Slack, Okta, Zoom, and CrowdStrike, to understand identity and detect multi-channel attacks.
  • Organizational and Supply Chain Insights: Tracking vendor relationships is vital for your advanced email security platform. BEC attacks often leverage the goodwill between vendors and partners throughout the supply chain. Adopting a platform that understands cross-organizational relationships can impede these attacks and stop malicious activity from compromised accounts outside of your organization; a simple vendor bank-detail check, sketched further below, shows the core idea.
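
As a rough illustration of the behavioral approach, the sketch below (hypothetical field names and thresholds, not any vendor's actual model) baselines where and when a sender normally emails from, then scores a new message against that baseline:

```python
# Minimal, illustrative sketch of behavioral baselining. The event
# fields and thresholds are hypothetical, chosen only to show the idea:
# learn a sender's normal patterns, then score deviations.
from collections import Counter
from dataclasses import dataclass

@dataclass
class EmailEvent:
    sender: str
    source_country: str
    hour_utc: int  # 0-23

def build_baseline(history: list[EmailEvent]) -> dict:
    """Count how often the sender mails from each country and hour."""
    return {
        "countries": Counter(e.source_country for e in history),
        "hours": Counter(e.hour_utc for e in history),
        "n": len(history),
    }

def anomaly_score(event: EmailEvent, baseline: dict) -> float:
    """Crude score in [0, 2]: +1 for an unseen country, +1 for a rare hour."""
    score = 0.0
    if baseline["countries"][event.source_country] == 0:
        score += 1.0
    if baseline["hours"][event.hour_utc] / max(baseline["n"], 1) < 0.02:
        score += 1.0
    return score

# 200 historical messages, all sent from the US during business hours.
history = [EmailEvent("ceo@example.com", "US", h % 10 + 8) for h in range(200)]
baseline = build_baseline(history)

suspicious = EmailEvent("ceo@example.com", "RU", 3)  # new country, odd hour
print(anomaly_score(suspicious, baseline))  # -> 2.0, worth flagging
```

Production systems replace these crude counts with identity models, relationship graphs, and deep content analysis, but the principle is the same: learn what normal looks like, then flag deviations.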

With these capabilities, a sophisticated email security platform detects anomalous behavior so that attacks powered by generative AI are stopped before they reach your mailboxes.
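
The supply chain insight reduces to a similarly simple invariant, sketched here with a hypothetical data model (the vendor records and check are illustrative, not a real product API): an invoice whose payment details differ from the bank account previously on file for that vendor deserves an alert.

```python
# Illustrative sketch: flag invoice emails whose payment details differ
# from the bank account previously on file for that vendor, a common
# BEC tell. Vendor addresses and IBANs below are example placeholders.
known_vendor_accounts = {
    "billing@vendor-a.example": "DE89 3704 0044 0532 0130 00",
    "invoices@vendor-b.example": "GB29 NWBK 6016 1331 9268 19",
}

def check_invoice(sender: str, iban_in_email: str) -> str:
    expected = known_vendor_accounts.get(sender)
    if expected is None:
        return "review: unknown vendor"
    if iban_in_email != expected:
        return "alert: bank details changed, possible BEC"
    return "ok"

print(check_invoice("billing@vendor-a.example", "FR76 3000 6000 0112 3456 7890 189"))
# -> "alert: bank details changed, possible BEC"
```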

In the end, generative AI isn’t a miracle technology, nor is it a Terminator-type threat. It’s a tool with many use cases, some legitimate and some nefarious. But to truly protect your employees from AI-generated phishing scams, BEC, malware, and endpoint exploitation, you need an AI-powered email security platform.
