
Three Ways Cybercriminals Could Use AI to Harm Your Organization

Published: 07/18/2023

Originally published by ThreatLocker.


Introduction

Generative Artificial Intelligence (AI) tools have elevated the way individuals streamline their day-to-day tasks. AI has proven to be a groundbreaking development in human efficiency and in the way people create, structure, and build upon their lives in and out of work. To put the scale of this phenomenon into perspective, TIME reported that ChatGPT gained 100 million active users in just two months, an astronomical figure compared to the two and a half years it took Instagram to accumulate the same user count. However, these same tools now also lie in the hands of users with malicious intentions, making it much more difficult for IT professionals to stay ahead of cybercriminals. In this blog, we will cover three ways cybercriminals could use AI tools to harm your organization.


Phishing Attacks

Users are an essential component of your cybersecurity strategy. You rely on their ability to identify and neutralize phishing attempts at any moment, which is why phishing awareness training has become a requirement under most compliance regulations. Unfortunately, AI tools can produce phishing lures that give current identification strategies, like the S.L.A.M. method (Sender, Links, Attachments, Message), a run for their money. These tools can search the internet for phishing identification techniques and construct convincing copy to trick users within your organization. The grammar can be flawless, the false stories more believable, and the recipients more precisely targeted, based on statistics and demographics the AI can research in a matter of seconds. These changes can escalate generic phishing attempts into targeted spear phishing attacks, produced in greater quantities and faster than ever.

Phishing attempts come in countless forms. The one you may see most frequently asks an employee to purchase gift cards and share the codes, but you may also see messages carrying a PDF, Word document, or other attachment. When opened on your endpoints, these attachments can run malicious scripts meant to corrupt your organization’s infrastructure or to exfiltrate or encrypt its data. Not content merely to disrupt operations, the bad actors can then demand a ransom in exchange for a remedy. In many cases, however, nothing happens once you pay the ransom: you lose your cash and your data.
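As a concrete illustration, the sketch below flags inbound email attachments whose file types are commonly used to carry executable scripts. It is a minimal triage example, not a mail filter: the extension list is illustrative rather than exhaustive, and real-world filtering inspects content, not just filenames.

```python
# Minimal sketch: flag email attachments with script-friendly extensions.
# The RISKY_EXTENSIONS set is illustrative, not exhaustive.
from email import policy
from email.parser import BytesParser

RISKY_EXTENSIONS = (".exe", ".js", ".vbs", ".ps1", ".bat", ".scr", ".docm", ".xlsm")

def flag_risky_attachments(raw_message: bytes) -> list[str]:
    """Return filenames of attachments worth quarantining for review."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    return [
        part.get_filename()
        for part in msg.iter_attachments()
        if part.get_filename()
        and part.get_filename().lower().endswith(RISKY_EXTENSIONS)
    ]
```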


How to Mitigate Phishing Attacks

Phishing attacks are unpredictable; they can happen right under your nose and can bring your organization down in less than a day. One of the best ways to prevent a phishing-led cyberattack is to implement controls in your environment that stop the attack before it can happen. Implementing Multi-Factor Authentication (MFA) across your organization is one of the most effective mitigations. If constraints prevent the implementation of MFA, investing in a robust application allowlisting and/or containment tool could be how you protect your organization and its valuable intellectual property from being stolen through phishing, as sketched below.
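To make the allowlisting idea concrete, here is a minimal sketch of hash-based, deny-by-default execution control. It assumes a pre-vetted set of SHA-256 digests; commercial tools enforce this at the operating-system level rather than in a script, so treat this purely as an illustration of the logic.

```python
# Minimal sketch: deny-by-default execution based on a SHA-256 allowlist.
import hashlib
from pathlib import Path

APPROVED_HASHES: set[str] = {
    # SHA-256 digests of binaries your organization has vetted go here.
}

def is_allowed(executable: Path) -> bool:
    """Only permit binaries whose hash appears on the allowlist."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES
```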


Malicious Scripts Are Easier to Write Than Ever With AI

Countless IT professionals and hobbyists use AI tools to construct scripts and code when they have difficulty putting the pieces together. Threat actors use these same tools for the same purpose, except with the goal of harming your organization. To make matters worse, there are individuals who lack the skills to create complex malware but are perfectly capable of commanding generative AI tools to write scripts from scratch. Without any experience whatsoever, they can input a prompt and, in a matter of seconds, be presented with a company-killing script, increasing the number of threat actors out in the world.

Fortunately, organizations like OpenAI are working to restrict what their products can produce. For example, ChatGPT is showing signs of recognizing when a request is malicious, as shown by Rob Allen and Jimmy Hatzell in a recent webinar between ThreatLocker and CyberQP. However, it still has a way to go. As the two demonstrated, it is easy for anyone to work loopholes into their requests and create malware for data exfiltration. In the full video, you can watch the duo create a script that weaponizes the trusted applications on your endpoint to export your data to another location. The demonstration shows that although threat actors are using AI tools like ChatGPT to design malware scripts, OpenAI understands the spectrum of good and bad uses of its tool and is actively trying to stop cybercriminals from exploiting ChatGPT’s capabilities.


Mitigate the Weaponization of Trusted Applications

Just because an application is on your allowlist doesn’t mean you are 100% safe from threat actors. Cyberattacks like living-off-the-land (LOTL) attacks weaponize the trusted, allowed applications within your environment to distribute malware such as ransomware and data exfiltration scripts. You can mitigate the threat of your trusted applications becoming weaponized by implementing a Zero Trust control tool that limits how each trusted application can interact with other software within your environment, as sketched below. These containment tools work as an excellent second line of defense when paired with the allowlisting tools mentioned above.
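The sketch below illustrates the containment idea at its simplest: even an allowed application is restricted in what it may launch. The policy table and process names are hypothetical examples; real Zero Trust tools enforce these rules inside the operating system, not in application code.

```python
# Minimal sketch: deny-by-default parent/child process policy ("ringfencing").
# The policy entries below are hypothetical examples.
ALLOWED_CHILDREN: dict[str, set[str]] = {
    "winword.exe": set(),           # Word has no business spawning anything
    "powershell.exe": {"git.exe"},  # e.g., permit scripted deployments only
}

def may_spawn(parent: str, child: str) -> bool:
    """A parent may only launch children explicitly permitted by policy."""
    return child.lower() in ALLOWED_CHILDREN.get(parent.lower(), set())

# may_spawn("WINWORD.EXE", "powershell.exe") -> False, blocking a classic
# living-off-the-land macro chain.
```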


Enhancing Existing Malware

One welcome capability of generative AI tools is their ability to deconstruct malware scripts. Any user can input a script and prompt the AI tool to explain what it does. This capability can alert IT professionals that the “trusted” scripts shared with them are not as friendly as they believed. However, threat actors can use the same capability to their advantage, asking the same question and then prompting the tool to examine and alter the code. Through this kind of reverse engineering, generative AI tools can turn existing scripts into smarter, more destructive code that identifies and bypasses modern security measures, such as firewalls and detection-and-response tools, or exploits their unknown vulnerabilities. You won’t know the malware is in your organization until it is too late!
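For the defensive half of that coin, here is a minimal sketch of “tell me what this script does” triage using the openai Python client. The client usage, model name, and prompt are assumptions for illustration (the snippet expects an OPENAI_API_KEY in the environment), and the model’s answer should be treated as a hint for an analyst, never as a verdict.

```python
# Minimal sketch: ask a model to summarize what an unknown script does.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_script(script_text: str) -> str:
    """Return a model-generated description of the script's behavior."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Describe what the "
                        "following script does and flag any risky behavior."},
            {"role": "user", "content": script_text},
        ],
    )
    return response.choices[0].message.content
```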


Mitigate the Threat of Enhanced Malware

Modern detection tools are great additions to your cybersecurity strategy, but as the name suggests, they detect what is already in your system. As mentioned above, though, malware is becoming extremely advanced and can bypass the policies that would normally detect it. It is best to pair a detection and response tool with Zero Trust control tools: the control tools prevent the cyberattack, while the detection and response tool notifies you that it was attempted, as sketched below.
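A minimal sketch of that pairing follows: the control blocks by default, and every block is also surfaced to the monitoring side. The send_alert() helper is hypothetical; in practice you would wire it to your SIEM or detection-and-response tool.

```python
# Minimal sketch: prevention and detection working together.
# send_alert() is a hypothetical hook into your monitoring pipeline.
import logging

logger = logging.getLogger("execution-control")

def send_alert(event: dict) -> None:
    """Hypothetical hook: forward the event to your detection/response tool."""
    logger.warning("blocked execution: %s", event)

def handle_execution_request(path: str, allowed: bool) -> bool:
    if not allowed:
        send_alert({"action": "block", "path": path})
        return False  # the control prevents the attack...
    return True       # ...while the alert tells you it was attempted
```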


Conclusion

Generative AI tools are still very new, and there is immense potential for them to change the way you live your life, conduct business, and generate competitive copy and strategies. What has been discovered thus far is only the tip of the iceberg, which is why IT professionals like Roy Richardson, VP and CSO of Aurora InfoTech, are predicting that AI will be one of the biggest challenges to overcome in the near future.
