
The Future Role of AI in Cybersecurity

Published 03/11/2024

Originally published by DigiCert.

Written by Dr. Avesta Hojjati.

With an estimated market size of $102 billion by 2032, it’s no secret that artificial intelligence (AI) is taking every industry by storm. We all know the basic idea of AI: it’s like teaching a computer by showing it lots of pictures, telling it what’s in each one, and letting it learn from those examples until it can figure things out on its own.

However, AI requires data, and where that data comes from, how it is processed, and what comes out of those processes all raise questions of identity and security. Understandably, many people are concerned about the security of that data. A 2023 survey found that 81% of respondents were concerned about the security risks associated with ChatGPT and generative AI, while only 7% were optimistic that AI tools would enhance internet safety. Strong cybersecurity measures will therefore be even more critical as AI technologies spread.

But there are also myriad opportunities to apply AI in cybersecurity to improve threat detection, prevention, and incident response. Companies need to understand both the opportunities and the weaknesses of AI in cybersecurity to stay ahead of forthcoming threats. In today’s post, I’m diving into the key things companies need to know when exploring the adoption of AI in cybersecurity, and how to protect against emerging AI threats.


How AI can be used to strengthen cybersecurity

On the bright side, AI can help transform cybersecurity with more effective, accurate and quicker responses. Some of the ways AI can be applied to cybersecurity include:

  • Pattern recognition to reduce false positives: AI excels at pattern recognition, which means it can detect anomalies, analyze behavior, and flag threats in real time (a minimal sketch follows this list). In fact, a 2022 Ponemon Institute study found that organizations using AI-driven intrusion detection systems experienced a 43% reduction in false positives, allowing security teams to focus on genuine threats. AI-powered email security solutions have also been shown to reduce false positives by up to 70%.
  • Enable scale by enhancing human capabilities: AI can augment human analysts, provide faster response times, and offer scalability; the only limit to that scale is the availability of data. Additionally, AI chatbots can serve as virtual assistants, offering security support and taking some of the burden off human agents.
  • Speed up incident response and recovery: AI can automate actions and routine tasks based on previous training and multipoint data collection, offering faster response times and reducing detection gaps. AI can also automate reporting, offer insights through natural language queries, simplify security systems, and provide recommendations to enhance future cybersecurity strategies.
  • Sandbox phishing training: Generative AI can create realistic phishing scenarios for hands-on cybersecurity training, fostering a culture of vigilance and preparing employees for real-world threats.
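
To make the first bullet concrete, here is a minimal sketch of anomaly-based threat detection using an Isolation Forest. Everything here is illustrative: the login features, thresholds, and data are fabricated, and a production intrusion detection pipeline would be far more involved.

```python
# Minimal sketch: flagging anomalous login events with an Isolation Forest.
# The feature set and data are fabricated for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Fabricated "normal" history: [login_hour, megabytes_transferred, failed_attempts]
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),   # logins cluster around business hours
    rng.normal(20, 5, 500),   # typical transfer sizes
    rng.poisson(0.2, 500),    # failed attempts are rare
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_logins)

# Score a new event: a 3 a.m. login moving 500 MB after 6 failed attempts
suspicious = np.array([[3, 500, 6]])
print(model.predict(suspicious))             # -1 flags an anomaly
print(model.decision_function(suspicious))   # lower score = more anomalous
```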


Potential threats of AI to data security

We are already seeing attackers use AI in attacks. For instance:

  • AI-automated malware campaigns: Cybercriminals can employ generative AI to craft sophisticated malware that adjusts its code or behavior to avoid detection. These "intelligent" malware strains are harder to predict and control, raising the risk of widespread system disruptions and massive data breaches.
  • Advanced phishing attacks: Generative AI has the capability to learn and mimic a user's writing style and personal information, making phishing attacks considerably more persuasive. Tailored phishing emails, appearing to originate from trusted contacts or reputable institutions, can deceive individuals into divulging sensitive information, posing a substantial threat to personal and corporate cybersecurity.
  • Realistic deepfakes: Thanks to generative AI, malicious actors can now create deepfakes – highly convincing counterfeits of images, audio, and videos. Deepfakes pose a significant risk for disinformation campaigns, fraudulent activities and impersonation. Picture a remarkably lifelike video of a CEO announcing bankruptcy or a fabricated audio recording of a world leader declaring war. These scenarios are no longer confined to the realm of science fiction and have the potential to cause significant disruption.

In addition, AI requires a lot of data, and companies need to limit exactly what is shared, since every AI service is another third party where data could be breached. Even ChatGPT itself suffered a data breach due to a vulnerability in the Redis open-source library, which allowed users to access others' chat history. OpenAI swiftly resolved the issue, but it highlights the potential risks for chatbots and their users. Some companies have started banning the use of ChatGPT altogether to protect sensitive data, while others are implementing AI policies that limit what data can be shared with AI tools.
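
As a rough illustration of such a policy, here is a minimal sketch of a pre-submission redaction filter. The patterns and the `redact` helper are hypothetical; real data-loss-prevention tooling relies on vetted detectors rather than a handful of regexes.

```python
# Minimal sketch: redact obvious sensitive tokens before a prompt is sent
# to an external AI service. Patterns are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # US SSNs
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),        # card-like numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # email addresses
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED-KEY]"),   # API keys
]

def redact(text: str) -> str:
    """Apply each redaction pattern before the text leaves the company."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Summarize this ticket: user jane.doe@example.com, card 4111 1111 1111 1111"
print(redact(prompt))
# -> Summarize this ticket: user [REDACTED-EMAIL], card [REDACTED-CARD]
```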

The lesson here is that while threat actors are evolving to use AI in new attacks, companies need to familiarize themselves with the potential threats of compromise in order to protect against them.


Ethical considerations of AI in cybersecurity

It would be remiss to talk about adopting AI in cybersecurity without mentioning the ethical considerations. It’s important to apply responsible AI practices and human oversight to ensure security and privacy. AI can only replicate what it has learned, and some of what it has learned is flawed. Before adopting AI solutions, companies should therefore weigh the ethical considerations, including the following:

  • Data bias amplification: AI algorithms learn from historical data, and if the data used for training contains biases, the algorithms can inadvertently perpetuate and amplify those biases. This can result in unfair or discriminatory outcomes when the algorithms make decisions or predictions based on biased data.
  • Unintended discrimination: AI algorithms may discriminate against certain groups or individuals due to biases in the training data or the features the algorithms consider. This can lead to unfair treatment in areas like hiring, lending, or law enforcement, where decisions impact people's lives based on factors beyond their control (a rough per-group check is sketched after this list).
  • Transparency and accountability: Many AI algorithms, especially complex ones like deep neural networks, can be challenging to interpret and understand. Lack of transparency makes it difficult to identify how biases are introduced and decisions are made, leading to concerns about accountability when biased or unfair outcomes occur.
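
As a rough way to quantify the discrimination concern above, the sketch below compares a classifier's false-positive rate across two groups. The data, group labels, and simulated bias are all fabricated; real fairness audits use dedicated toolkits and far richer metrics than this single comparison.

```python
# Minimal sketch: compare false-positive rates per group to surface bias.
# All data here is synthetic, with bias against group "B" injected on purpose.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)    # hypothetical demographic attribute
y_true = rng.integers(0, 2, size=n)       # ground-truth "threat" labels

# Simulate a biased model that wrongly flags ~15% of benign group-B cases
flip = (group == "B") & (y_true == 0) & (rng.random(n) < 0.15)
y_pred = np.where(flip, 1, y_true)

for g in ("A", "B"):
    benign = (group == g) & (y_true == 0)   # truly benign cases in this group
    fpr = y_pred[benign].mean()             # fraction wrongly flagged
    print(f"group {g}: false-positive rate = {fpr:.1%}")
```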

While AI is still a bit of a wild west right now, we will see emerging regulation requiring transparency and accountability to offset some of these privacy and ethical concerns. For instance, the European Commission has already called on major tech corporations such as Google, Facebook, and TikTok to label AI-generated content as part of their efforts to counter the spread of disinformation on the internet. Under the EU Digital Services Act, platforms will soon be obligated to clearly mark deepfakes with noticeable indicators.


The human-AI partnership in cybersecurity

Given the limitations of AI, humans should always be the final decision makers, using AI to speed up the process. Companies can use AI to surface multiple options so that key decision makers can act quickly; AI will supplement, but not replace, human decision-making. Together, AI and humans can accomplish more than either can alone.

AI

  • Learn from data and patterns
  • Can mimic creativity but lack genuine emotions
  • Rapid processing and analysis
  • Virtually unlimited memory storage
  • Can scale to handle massive datasets
  • Lacks true self-awareness
  • Devoid of genuine empathy

Humans

  • Learn from experiences and adapt over time
  • Exhibit creativity and emotional understanding
  • Limited speed compared to AI
  • Limited memory capacity
  • Cannot easily scale certain tasks
  • Exhibit self-awareness and consciousness
  • Express empathy and emotional connection


Building digital trust in AI with PKI

The use of technologies such as Public Key Infrastructure (PKI) can play a fundamental role in protecting against emerging AI-related threats, such as deepfakes, and in maintaining the integrity of digital communications.

For example, a consortium of leading industry players, including Adobe, Microsoft, and DigiCert, is working together as the Coalition for Content Provenance and Authenticity (C2PA). This initiative has introduced an open standard designed to tackle the challenge of verifying and confirming the legitimacy of digital files. C2PA leverages PKI to generate a tamper-evident trail, empowering users to discern between genuine and counterfeit media. The specification lets users ascertain the source, creator, creation date, location, and any modifications to a digital file. The primary goal of the standard is to foster transparency and trustworthiness in digital media, especially as AI-generated content becomes increasingly difficult to distinguish from reality.
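
To illustrate the underlying PKI mechanism (not the actual C2PA manifest format, which embeds signed, structured claims and certificate chains), here is a minimal sketch of signing and verifying a media file's digest with Ed25519, assuming the Python cryptography package is installed.

```python
# Minimal sketch of the PKI idea behind content provenance: sign a file's
# hash with a private key; anyone with the public key can verify it.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator side: generate a key pair and sign the media file's digest
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw bytes of an image or video..."
signature = private_key.sign(hashlib.sha256(media_bytes).digest())

# Consumer side: recompute the digest and verify the signature
received_bytes = b"...raw bytes of an image or video..."
try:
    public_key.verify(signature, hashlib.sha256(received_bytes).digest())
    print("Signature valid: the file matches what the creator signed")
except InvalidSignature:
    print("Signature invalid: the file was altered after signing")
```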

In sum, AI will open up many opportunities in cybersecurity, and we have only scratched the surface of what it can do. AI will be used as both an offensive and a defensive tool, causing cyber attacks as well as preventing them. The key is for companies to be aware of the risks and to start implementing solutions now, while keeping in mind that AI cannot fully replace humans.
