Adopting AI-based Attack Detection

Blog Article Published: 03/24/2022


This blog was originally published by LogicHub here.

Written by Willy Leichter, Chief Marketing Officer, LogicHub.

The security industry is long overdue for real innovation: the practical application of emerging technologies such as automation, machine learning, and artificial intelligence to attack detection.

Most security analysts voice a common refrain: there is too much noise and far too many alerts from too many security tools, making it difficult to find the real threats.

A major banking customer of ours came to us because more than 80% of their alerts in their Security Operations Center (SOC) were trivial or false positives. Their average time to respond to each alert was 42 minutes, and even with a team of 14 analysts, they simply couldn’t keep up with the thousands of alerts pouring into their SIEM every day. The math simply didn’t work.
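That math is easy to sanity-check. Here is a minimal back-of-the-envelope sketch, assuming an illustrative 2,000 alerts per day (the post only says "thousands") and one eight-hour shift per analyst:

```python
# Back-of-the-envelope check of the SOC workload described above.
# Assumptions: 2,000 alerts/day (the post only says "thousands") and one
# 8-hour shift per analyst per day.
alerts_per_day = 2_000
minutes_per_alert = 42            # average response time cited in the post
analysts = 14
shift_minutes = 8 * 60

demand = alerts_per_day * minutes_per_alert   # 84,000 analyst-minutes of work
capacity = analysts * shift_minutes           # 6,720 analyst-minutes available

print(f"Needed: {demand:,} min/day, available: {capacity:,} min/day")
print(f"Coverage: {capacity / demand:.0%}")   # roughly 8% of alerts can be worked
```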

Increasingly, security has also become a big data problem. Our techniques for addressing it rely on older technology, such as the SIEM, that simply cannot scale to the volume of data and cannot take advantage of advances in AI/ML to weed through the noise, make critical decisions, and find the needles in the proverbial haystacks.

It’s time to move past the ‘black-box’ AI hype

As artificial intelligence gains wider acceptance and becomes part of everyday life, the security industry has lagged in implementing practical applications of these technologies to reduce repetitive labor and improve the effectiveness of security solutions.

The industry only has itself to blame. For the last decade, there has been a constant marketing drumbeat around the benefits of AI, without clarity on how it was being applied. These claims of magical black-box techniques by much of the industry led to well-deserved skepticism in the market. While most vendors claim AI capabilities, very few can explain how those capabilities work, make them transparent to customers, or offer essential customization.

Gartner specifically calls out this gap, recommending that product leaders should:

Improve adoption of AI-enabled solutions by moving away from a “black-box” approach toward explainable and customizable AI models that can be tuned based on analyst feedback. [1]
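To make "explainable and customizable" concrete, here is a minimal sketch of an alert score that returns the specific analyst-tunable rules behind each verdict rather than a black-box number. The rule names, weights, and threshold are hypothetical, not Gartner's or any vendor's:

```python
# Minimal sketch of an explainable, tunable alert score.
# The rules and weights are hypothetical and meant to be edited by analysts.
RULES = [
    ("impossible_travel", 40, lambda a: a.get("geo_velocity_kmh", 0) > 900),
    ("rare_process",      25, lambda a: a.get("process_prevalence", 1.0) < 0.01),
    ("off_hours_login",   15, lambda a: a.get("hour") is not None and not 8 <= a["hour"] <= 18),
]

def score_alert(alert: dict) -> dict:
    """Return a score plus the human-readable reasons behind it."""
    fired = [(name, weight) for name, weight, check in RULES if check(alert)]
    score = sum(weight for _, weight in fired)
    return {
        "score": score,
        "verdict": "escalate" if score >= 50 else "close",
        "reasons": [name for name, _ in fired],   # the explanation, not a black box
    }

print(score_alert({"geo_velocity_kmh": 1200, "process_prevalence": 0.002, "hour": 3}))
# {'score': 80, 'verdict': 'escalate', 'reasons': ['impossible_travel', 'rare_process', 'off_hours_login']}
```

Because the rules and weights sit in plain view, analysts can tune them based on feedback instead of trusting an opaque model.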

Misconceptions about both humans and AI in security

One of the most common misconceptions about security is that while computers can process large quantities of data, you really need seasoned human analysts to make any important decisions. Experienced analysts certainly can make smart decisions, but most of the work they do is highly repetitive and mind-numbingly robotic, and at that kind of work humans are simply not that reliable.

Humans don’t make good robots, and we shouldn’t ask them to do robotic work. Modern automation systems can break down complex tasks into a series of playbook actions, learn how human analysts make decisions, and then perform these repetitive tasks millions of times faster and more reliably than humans can.
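As a minimal sketch of that decomposition (the step names, threshold, and threat-intel stand-in below are illustrative, not any particular product's playbook):

```python
# Minimal sketch of a triage playbook: a complex task broken into ordered steps.
# Step names, the threshold, and the "known bad" set are illustrative only.
def enrich(alert):
    alert["known_bad_ip"] = alert.get("src_ip") in {"203.0.113.7"}   # stand-in for a threat-intel lookup
    return alert

def score(alert):
    alert["risk"] = 90 if alert["known_bad_ip"] else 10
    return alert

def decide(alert):
    alert["action"] = "open_incident" if alert["risk"] >= 50 else "auto_close"
    return alert

PLAYBOOK = [enrich, score, decide]

def run_playbook(alert):
    for step in PLAYBOOK:
        alert = step(alert)
    return alert

print(run_playbook({"src_ip": "203.0.113.7"})["action"])   # open_incident
print(run_playbook({"src_ip": "198.51.100.2"})["action"])  # auto_close
```

Each step is small and testable, and the routine cases never need to wait on a human.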

‘I know it’s bad when I see it’

While not all security experts can build complex playbooks, experienced analysts can often recognize bad activity when they see it. This point is valid, and well-designed AI systems should take advantage of human experience and constantly use human input to improve accuracy.

Gartner calls this out as a key need in AI attack detection and recommends that solutions should include: The capturing of the skills, expertise, and techniques of security analysts for use cases, such as data labeling, threat hunting, automated investigation, and response and remediation. [2]

Don’t wait for expert intuition to start threat hunting

While many organizations are embracing automation for alert triage and incident response, we find that threat hunting is left for last, or not addressed at all. A key reason for this is the misconception that you need the most highly trained “ninja” analysts to take this on, and with resources strained just responding to daily alerts, this always remains on the future wish list.

While experts are important, it’s critical to leverage AI and machine learning (ML) to capture their expertise and make their techniques and responses repeatable and scalable to address the ever-expanding threat landscape.
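One hedged sketch of what capturing that expertise can look like in code, using analyst verdicts as labels for a simple model; scikit-learn and the three features here are illustrative assumptions, not the author's stack:

```python
# Sketch: capture analyst verdicts as labels and retrain a simple model on them.
# scikit-learn and these three features are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Each row: [events_per_minute, distinct_hosts_touched, ran_off_hours]
# Labels come from analyst dispositions: 1 = confirmed malicious, 0 = benign.
features = [[120, 14, 1], [3, 1, 0], [80, 9, 1], [5, 2, 0], [200, 30, 1], [2, 1, 0]]
analyst_labels = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(features, analyst_labels)

# New activity gets a malicious-probability score; analysts review only the top of
# the list, and their verdicts flow back into the training data for the next retrain.
candidate = [[95, 11, 1]]
print(f"hunt priority: {model.predict_proba(candidate)[0][1]:.2f}")
```

The point is the loop, not the particular algorithm: analyst judgment becomes labeled data, and the system's hunting leads improve with every disposition.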

The following excerpt from the Gartner report addresses the need for innovation in AI Attack Detection:

  • The use of AI to automate the tasks — such as threat hunting — of skilled cybersecurity staff by encoding the staff’s domain expertise and techniques can help address the shortage of skilled personnel.
  • The ability to leverage AI itself to customize the AI models for each customer’s environment offers significant scaling opportunities.
  • There is the need to move away from a black-box AI approach toward explainable AI models that can offer explanation behind their decision in a human understandable way and incorporate feedback from experienced human analysts. [3]

[1, 2, 3] All Gartner quotes in this blog post are from Emerging Technologies: Tech Innovators in AI in Attack Detection – Demand Side, Gartner, 2021


About the Author

Willy Leichter is LogicHub’s Chief Marketing Officer. Willy has extensive experience in application security, network security, global data privacy laws, data loss prevention, access control, email security, and cloud applications. He has held marketing leadership positions at Virsec, CipherCloud, Axway, Websense, Tumbleweed Communications, and Secure Computing (now McAfee).
