
It's Time to Put AI to Work in Security

Blog Article Published: 05/31/2022

This blog was originally published by LogicHub here.

Written by Willy Leichter, LogicHub.

While we’ve been talking about and imagining artificial intelligence for years, it has only recently started to become mainstream and accepted for a wide range of applications – from healthcare analytics to Google Maps and Roombas. At the same time, cybersecurity has been strangely slow in embracing this important technology.

There are several reasons for this: too much hype about early “black box” AI claims; misconceptions about the technology; and a persistent belief among the security old guard that AI can’t be trusted and will never match human intuition for spotting the real threats.

It’s time to move beyond these misconceptions, and look at concrete examples of how automation, AI, and machine learning have been effectively applied to improve security coverage and accuracy, while dramatically reducing costs. The real question is not whether AI or human intuition is better – it’s how we can effectively combine the two, enabling intelligent automation that takes on routine decision making by learning from human experts.

Urgency of the Problem

Because we’re in the middle of a cybersecurity crisis, we need to move beyond theoretical discussions of the pros and cons of AI and get serious about adopting more advanced technology to meet urgent needs. Today’s realities make updating our approach to security an imperative:

  • A significant increase in both the number of attacks and the damage they cause – ransomware payments, OT network shutdowns, loss of corporate IP, and loss of private information
  • Attacks are becoming increasingly sophisticated, bypassing many of our legacy security strategies
  • There is a huge shortage of skilled security analysts, and they don’t want to be burdened by repetitive manual work – throwing bodies at the problem is neither practical nor effective
  • The legacy perimeter security model is obsolete, as the battle is moving to widely distributed cloud applications and virtual infrastructure
  • Attackers are embracing AI and ML wholeheartedly, launching more sophisticated attacks that quickly learn and adapt to our inadequate defenses

Misconceptions About Artificial Intelligence

AI has become such a buzzword that few of us stop to think about what it practically means. Hollywood has filled our imaginations with benign and frightening images of AI – from R2D2 to the Terminator – but the reality we’re more likely to see is a device like the Roomba. This faceless, personality-less device uses AI to navigate obstacles, plan its routes efficiently, and take care of repetitive tasks that many of us would rather avoid.

AI Needs to be Explainable and Customizable

Gartner has published a series of insightful reports on the emerging use of AI in attack detection. One of their key findings is that “an inability to customize and audit artificial intelligence (AI) models is a major inhibitor to adoption.” They also recommend that emerging technologies move away from a ‘black box’ approach toward “explainable and customizable AI models that can be tuned based on analyst feedback.”

It's fair to be skeptical about vendor claims of AI “magic” that only they can see or understand. But dismissing it all as hype ignores many examples where AI-driven automation is having a significant impact.
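
To make the idea of an explainable, auditable model more concrete, here is a minimal sketch – not from the original article – using a simple linear classifier whose per-feature weights an analyst can inspect directly. The feature names and training data are hypothetical.

```python
# Minimal sketch of an "explainable" alert-scoring model whose weights an
# analyst can audit directly. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["failed_logins", "rare_process", "off_hours", "known_bad_ip"]

# Toy historical alerts: feature vectors and analyst verdicts (1 = real threat)
X = np.array([
    [12, 1, 1, 1],
    [ 1, 0, 0, 0],
    [ 8, 1, 0, 1],
    [ 0, 0, 1, 0],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Because the model is linear, each feature's contribution to a score is
# simply weight * value -- easy to display alongside the verdict for review.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight={weight:+.2f}")
```

Because the weights are visible, an analyst can challenge or tune the model rather than having to trust an opaque verdict.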

Combining Human and Machine Skills

The human brain is remarkable at making judgments and decisions extremely rapidly, based on subtle signals and acquired experience. While this is often thought of as intuition, it’s really accumulated experience and dozens of quick decisions made almost subconsciously. In fact, the field of autonomous vehicles has struggled to replicate the thousands of decisions humans can make in unexpected situations while driving down the road.

In the security context, experienced analysts can quickly and accurately spot isolated incidents and suspicious activity without being able to articulate each factor used in their decision-making. Many people will simply refer to this as “I know it when I see it.”

This is an ideal environment for machine learning. While humans can’t easily isolate all the factors they use in decision making, a feedback loop with human review allows machine learning models to quickly adjust and adapt as analysts give a thumbs up or thumbs down to automated results.
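
As a rough sketch of that feedback loop – assuming an incrementally trained classifier and hypothetical features, not any particular product’s implementation – each analyst verdict can be folded straight back into the model:

```python
# Sketch of an analyst-feedback loop: the model scores alerts and is
# incrementally updated with each thumbs-up / thumbs-down.
# Feature layout and labels are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 0 = benign, 1 = threat

def score_alert(features):
    """Return the model's current verdict for one alert (feature vector)."""
    return int(model.predict([features])[0])

def record_analyst_feedback(features, analyst_label):
    """Fold an analyst's thumbs-up/thumbs-down back into the model."""
    model.partial_fit([features], [analyst_label], classes=classes)

# Bootstrap with a few labeled alerts, then keep learning from feedback.
record_analyst_feedback([12, 1, 1], 1)
record_analyst_feedback([0, 0, 0], 0)
print(score_alert([10, 1, 0]))
record_analyst_feedback([10, 1, 0], 1)  # analyst confirms the verdict
```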

Even limited machine learning can yield huge results in security. Many analysts complain that 80-90% of their time is spent chasing routine, trivial, repetitive, and often false alerts. The steps they go through to analyze these alerts also tend to be repetitive and robotic. By identifying these patterns, automation driven by machine learning can eliminate the majority of this repetitive work, performing it at machine speed and more reliably than humans.
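
A triage pipeline along the following lines – the thresholds, alert fields, and scoring function are all illustrative assumptions – shows how that repetitive work can be handed to automation while the ambiguous cases still reach an analyst:

```python
# Sketch of automated alert triage: score every alert, auto-close the clearly
# benign ones, auto-escalate the clearly malicious ones, and queue only the
# ambiguous middle for human review. Thresholds and alert fields are hypothetical.
LOW, HIGH = 0.10, 0.90

def triage(alerts, score_fn):
    closed, escalated, review_queue = [], [], []
    for alert in alerts:
        risk = score_fn(alert)          # model-estimated probability of a real threat
        if risk < LOW:
            closed.append(alert)        # routine/false alert -- close automatically
        elif risk > HIGH:
            escalated.append(alert)     # high-confidence threat -- page the on-call
        else:
            review_queue.append(alert)  # uncertain -- send to an analyst
    return closed, escalated, review_queue

# Example with a stand-in scoring function.
alerts = [{"id": 1, "src": "10.0.0.5"}, {"id": 2, "src": "203.0.113.7"}]
closed, escalated, review_queue = triage(
    alerts, score_fn=lambda a: 0.05 if a["id"] == 1 else 0.95
)
print(len(closed), len(escalated), len(review_queue))
```

The point is not the specific thresholds but the division of labor: the machine handles the high-volume routine decisions, and human judgment is reserved for the cases that genuinely need it.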
