
How to Solve Complex Cloud Security Problems with AI

Blog Article Published: 09/16/2022

Written by Morey J. Haber, Chief Security Officer, BeyondTrust.

Artificial intelligence (AI) and, to a lesser extent, machine learning (ML) have become increasingly prevalent as a solution to solve complex cybersecurity problems. While the cloud has made this more practical due to shared resources and sanitized data from multiple sources, the implementation is still not straightforward for many organizations.

Just because artificial intelligence is able to supplement human analysis of security events and identify indicators of compromise doesn’t mean it gets it right all of the time. In some cases, it may be more of a burden than traditional correlation and logic. However, for other use cases, the results are a gold mine for threat hunting and for revealing indicators of compromise that would otherwise be too subtle for legacy tools to notice.

This article examines how you can model complex cybersecurity problems with AI, even when working with multicloud environments. It will also help you understand the benefits of AI and when to implement it to get the best possible outcome for your cloud security.

What is AI technology and how does it work?

AI is an approach that uses algorithms to allow assets to acquire intelligence in much the same way that living organisms learn. Notice that we did not state “humans” in this definition. This is because the learning patterns of insects and rodents have also been duplicated with artificial intelligence, and the results have been quite useful. As an example, consider models for swarm intelligence that leverage the behavior of bees to achieve complex security outcomes when multiple assets are operating in relative synchronization.

Whether it is modeled after human, animal, or insect intelligence, the secret to AI technology is that it can learn from repeated interactions with situations and events. During these repetitions, it develops correlations and predictions about current and future behavior. Artificial intelligence algorithms can discern information from a data series without dependence on a previously determined relationship or characteristics. Training occurs as it does with living organisms, and relationships are further strengthened by repetition and reinforcement. This approach has grown in practical terms, with the increase in computing power that is now available in the cloud and the prevalence of multitenant correlated datasets, all of which are allowing for the aggregation, ingestion, and analysis of very large datasets and events from similar data sources.

While artificial intelligence enables a level of reasoning that mimics a living organism, analyzing data at this volume and speed is impractical for living tissue compared to a computer. This is especially true given the scalability of the cloud resources that can perform the task, which is why AI has become such a powerful resource for cybersecurity.

How can AI help with cybersecurity?

Due to the considerable volume, diversity, and complexity of data created by modern information networks, artificial intelligence can be a useful way to supplement human analysis of security events and identify indicators of compromise. In particular, AI can help security analysts by:

  • Detecting when abnormal behaviors occur in periodic or mundane events that would normally be overlooked.
  • Detecting when events in the cloud could potentially indicate an attack or suspicious event.
  • Modeling behavior of human and machine identities to identify anomalies and inappropriate actions.
  • Modeling and reporting on the risk surface for assets being monitored, and assessing vulnerabilities and exposures that might occur due to behavior or configuration changes.
  • Correlating the information from known, and potentially unknown, attack strategies that may escalate the potential for inappropriate actions or behavior.

This analysis is particularly useful in situations where behaviors can be unpredictable, where identities and processes are ephemeral, and where targeted rules defined in policies are problematic in determining if an event was malicious or not.
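As an illustration of the first bullet above, even a simple statistical check can surface an abnormal spike in a periodic, mundane event stream. The sketch below uses a plain z-score rather than a trained model, and the event counts are invented, but it shows the kind of outlier a human scanning raw logs would likely miss:

```python
from statistics import mean, stdev

def find_anomalies(event_counts, threshold=3.0):
    """Flag counts that deviate more than `threshold` standard
    deviations from the mean of the series (a simple z-score check)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []
    return [(i, c) for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# Hourly login counts for a service account; the spike at index 5
# is easy to overlook in a raw log stream but stands out statistically.
counts = [12, 14, 11, 13, 12, 95, 13, 12, 14, 11, 13, 12]
print(find_anomalies(counts))
```

A real AI engine learns far richer baselines than a single mean and standard deviation, but the principle of flagging deviation from learned behavior is the same.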

As noted above, artificial intelligence can be used to assess anomalies, inappropriate actions, and suspicious events that could indicate compromise, but it doesn’t stop there. AI can also be used to build a foundation for these situations. The results can be correlated by other external sources, but AI will serve as a contextually relevant tool to assess what threat actors have potentially impacted an organization and to what degree.

AI can create initial relationships and then strengthen or weaken those relationships based on continuous analysis. This is crucial when a single event in time may go unnoticed. To an AI engine, that single event in time becomes relevant when other relationships can be established.
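A minimal sketch of that strengthening and weakening: the class below (entirely illustrative, not any vendor's implementation) keeps an exponentially weighted strength score per entity pair, so recent co-occurrences count for more than old ones:

```python
class RelationshipTracker:
    """Toy model of how an engine might strengthen or weaken a
    learned relationship between two entities over time."""

    def __init__(self, alpha=0.3):
        # alpha (an assumed value) controls how fast old evidence decays
        self.alpha = alpha
        self.strength = {}

    def observe(self, pair, seen_together):
        prev = self.strength.get(pair, 0.0)
        signal = 1.0 if seen_together else 0.0
        # Exponentially weighted update: recent evidence counts more
        self.strength[pair] = (1 - self.alpha) * prev + self.alpha * signal
        return self.strength[pair]

tracker = RelationshipTracker()
pair = ("identity:root", "asset:db01")
for _ in range(3):
    score = tracker.observe(pair, seen_together=True)
# Three consecutive sightings push the relationship toward 1.0;
# a run of absences would decay it back toward 0.0.
```

A single observation barely moves the score, which mirrors the point above: one event in time only becomes relevant once other relationships reinforce it.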

As an example, CIEM (cloud infrastructure entitlements management) solutions operating across multicloud environments can use AI models to enumerate privileges and perform advanced threat detection when correlated back to linked identities. Even though privileges are different depending on the cloud environment, identities present in both, such as root or an individual user, can be processed by AI to look for inappropriate behavior based on privileges owned and privileges accessed.
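To make the CIEM idea concrete, here is a hedged sketch in Python. The record formats, identity names, and action strings are invented for illustration; a real CIEM product would pull entitlements from each cloud provider's own APIs and apply learned models rather than a set lookup:

```python
# Hypothetical entitlement and usage records for one identity
# across two clouds; names are illustrative, not real API output.
entitlements = {
    ("alice", "aws"): {"s3:GetObject", "ec2:StartInstances"},
    ("alice", "azure"): {"Storage.Read"},
}

usage = [
    ("alice", "aws", "s3:GetObject"),
    ("alice", "aws", "iam:CreateUser"),   # no matching entitlement
    ("alice", "azure", "Storage.Read"),
]

def find_unentitled_usage(entitlements, usage):
    """Flag actions an identity performed without a matching
    entitlement in that cloud -- the kind of cross-provider
    correlation a CIEM tool surfaces."""
    return [
        (identity, cloud, action)
        for identity, cloud, action in usage
        if action not in entitlements.get((identity, cloud), set())
    ]

print(find_unentitled_usage(entitlements, usage))
```

The value of the AI layer is in linking the same identity across clouds and deciding whether an unentitled action is noise or an indicator of compromise.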

When you consider modern security initiatives like Zero Trust, AI becomes a cornerstone for the behavioral monitoring tenets specified in NIST SP 800-207. It is one thing to determine that authentication has failed using traditional logic; it is a different problem to determine that the behavior of the identity is inappropriate at any time during a session, once access has been granted.

By monitoring the session and looking for inappropriate tasks compared to previous access over time, AI can help identify and even predict when something is awry. Some may argue this an ML scenario (which we will discuss later), but if the AI engine has never been trained on what is appropriate session behavior, it will have to determine the outcome for itself. This is why AI, as a modern cybersecurity tool and paired with initiatives like Zero Trust, can help make the future of cybersecurity more efficient.
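One crude way to approximate this kind of session baselining: score a session by how many of its actions have never appeared in the identity's history. This is deliberately simplistic compared to a learned behavioral model, and the action names are hypothetical:

```python
from collections import Counter

def session_risk(baseline_sessions, current_session):
    """Score a session by the fraction of its actions that never
    appeared in the identity's historical sessions. A crude stand-in
    for a learned behavioral baseline."""
    seen = Counter(a for s in baseline_sessions for a in s)
    unseen = [a for a in current_session if a not in seen]
    return len(unseen) / len(current_session), unseen

baseline = [
    ["login", "read_report", "logout"],
    ["login", "read_report", "export_pdf", "logout"],
]
risk, unseen = session_risk(baseline, ["login", "dump_database", "logout"])
# "dump_database" has never been observed for this identity,
# so it drives the risk score even though the login itself succeeded.
```

This is exactly the gap the article describes: the authentication was valid, but the in-session behavior is what gives the compromise away.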

What cybersecurity challenges can AI help to overcome?

Where AI becomes particularly helpful is in resolving the challenges where human and other technological limitations become a hindrance for security. Some of these core challenges include:

  • The inability of humans to interpret raw security event data due to complexity. AI can help interpret these into human-readable events for investigations or general awareness.
  • The volume of security events can be overwhelming for humans to analyze, or costly to ingest into tools like a SIEM. AI can help distill the quantity into manageable forms, lowering costs and providing deduplication.
  • Based on geolocation and regulatory compliance requirements, security information may need to be obfuscated or sanitized to protect personally identifiable information. AI can perform the required analysis on obfuscated data streams when simple fields like usernames and IP addresses have been hashed to protect information.
  • One of the biggest challenges facing organizations today is the lack of security expertise available at a reasonable cost to monitor for inappropriate behavior. AI can help bridge that gap by taking mundane security tasks out of the equation, freeing up valuable resources for more important work.
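The obfuscation point above can be sketched with stable, salted hashes: the same input always yields the same token, so events can still be correlated without exposing the raw value. The field names and salt below are placeholders (in production the salt would be a managed secret):

```python
import hashlib

def pseudonymize(event, fields=("username", "src_ip"), salt="example-salt"):
    """Replace PII fields with stable salted hashes. Identical inputs
    map to identical digests, so correlation across events still works
    even though the raw values are hidden."""
    out = dict(event)
    for field in fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode())
            out[field] = digest.hexdigest()[:16]
    return out

a = pseudonymize({"username": "alice", "src_ip": "10.0.0.5", "action": "login"})
b = pseudonymize({"username": "alice", "src_ip": "10.0.0.9", "action": "logout"})
# a["username"] == b["username"], so the two events still correlate
# to the same (hidden) identity.
```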

What’s the difference between AI and Machine Learning?

AI should not be confused with machine learning. It is important to distinguish the two terms since they are so often misused or used interchangeably. Machine learning is actually a subset of AI, where the algorithms have been predefined for a specific data type and expected output. Machine learning is best characterized as fixed algorithms within artificial intelligence that can learn and postulate, while true AI is a step above, developing for itself the new algorithms needed to analyze data.

AI is more analogous to a human learning when a new behavior is needed without a previous frame of reference. Therefore, artificial intelligence is more associated with the interpretation of information that is learned to drive conclusions or make decisions, whereas, to work effectively, machine learning must already have an awareness of the scope of data being processed.

Because of this relationship, many machine learning implementations arrive as part of, or after, an artificial intelligence project, once the problem has been fully understood and the results can be modeled for better efficiency.

In the case of CIEM or Zero Trust, every company's cloud environments will differ. The event streams will look different based on workflows and will contain a wide variety of session access information. For AI, analyzing this data and predicting results is not a problem. For ML, details regarding the session types and workflows (applications, session type, geolocation, identities, etc.) would need to be defined to provide meaningful results.

For most organizations, the decision between AI and ML comes down to effort: you can invest countless hours modeling and training an ML model to predict results, or you can feed an AI engine an event stream and let it do the work for you. The choice may be simple, but interpreting the results, especially when they are presented only as statistics, is a different discussion. As an example, AI predicts a percentage of inappropriate activity based on the data it has ingested. What is the confidence threshold at which you should receive an alert or take an automated action, and, if a result turns out to be a false positive, how do you suppress it in the future?
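Those threshold and suppression questions can be sketched as a small gate in front of the model's output. The 0.8 threshold and signature-based suppression below are assumptions for illustration, not a recommendation:

```python
class AlertGate:
    """Sketch of turning a model's confidence score into an alert
    decision, with suppression of signatures an analyst has marked
    as false positives."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # assumed starting point; tune per feed
        self.suppressed = set()

    def decide(self, signature, confidence):
        if signature in self.suppressed:
            return "suppressed"
        if confidence >= self.threshold:
            return "alert"
        return "ignore"

    def mark_false_positive(self, signature):
        # Analyst feedback: stop alerting on this signature
        self.suppressed.add(signature)

gate = AlertGate()
gate.decide("geo-anomaly:alice", 0.92)        # fires an alert
gate.mark_false_positive("geo-anomaly:alice")  # analyst overrules it
gate.decide("geo-anomaly:alice", 0.95)        # now suppressed
```

Even this toy shows why interpretation matters: the model only emits a score, and the organization still has to decide what to do with it.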

When should I choose AI over Machine Learning for my security?

The debate of ML versus AI has been considered time and time again by mathematicians and their IT security counterparts. In fact, as we have discussed, the terms are often confused and misused. This raises the question: when should one be used over the other? Consider these questions:

  • If given enough time, could a human analyze the data set and determine the same outcome?
  • Is there a definable answer to the security question or is the answer something unknown?
  • Are the contents of the data set known and definable, or do they contain information that cannot readily be quantified?
  • Will the data always be in a specific format or contain information in a definable pattern?
  • Can you define the problem and articulate (code) specific outcomes?

As you answer these questions, you will begin to discover for yourself the correct choice. If the outcome and data sources are predictable by nature, ML is the best choice to process the information and determine the results. If the data is unpredictable and the outcome is uncertain, then AI is the best choice.

This also raises the question, is ML just a form of good correlation and pattern matching? The answer is not simple from a commercial point of view. Many solutions on the market today are based on algorithms that are seeded in ML technology but augmented by rules-based engines and statistics to drive specific responses. They are marketed as ML technology, but they clearly have canned responses based on data analytics.

This hybrid approach is also true for AI. There is no reason to choose a technology based on AI alone when portions of the solution could use ML, pattern matching, correlation, and even rules. In fact, a hybrid will probably work best for your organization, since established methods cover the basics while the more advanced techniques handle the outliers.

Therefore, when trying to choose a solution, don’t make the answer Boolean. Both AI and ML have use cases for your security teams, and a hybrid approach will provide the best results. After all, you do not need an AI engine to trigger basic events like multiple login failures, especially from a foreign geolocation. Simple rules can do that alone.
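The login-failure example really does reduce to a few lines of rules, with no AI involved. The field names and thresholds below are illustrative:

```python
from collections import defaultdict

def login_failure_rule(events, max_failures=3, home_countries=("US",)):
    """Plain rules-based detection -- no AI needed. Flag identities
    with repeated login failures, escalating when any failure comes
    from outside the home geolocations."""
    failures = defaultdict(list)
    for event in events:
        if event["result"] == "failure":
            failures[event["user"]].append(event["country"])
    flagged = {}
    for user, countries in failures.items():
        if len(countries) >= max_failures:
            foreign = any(c not in home_countries for c in countries)
            flagged[user] = "high" if foreign else "medium"
    return flagged

events = (
    [{"user": "bob", "result": "failure", "country": "RU"}] * 3
    + [{"user": "ann", "result": "success", "country": "US"}]
)
print(login_failure_rule(events))  # bob is flagged as high risk
```

Reserving AI for the cases rules cannot express keeps both cost and noise down.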

The top 6 outcomes of using AI to improve Cloud Security

If we keep in mind that security analysts also have varying levels of effectiveness in their roles, consider these potential outcomes of using AI to improve cloud security:

  1. AI can help reduce the data sets in a critical situation to a more manageable level simply by filtering out noise. A security analyst can then better focus on the problem, not on unrelated events.
  2. AI can reduce stress and mistakes in crisis situations, where the heat of the moment can limit visibility, dull human senses, impair the ability to interpret information, and lead to false conclusions.
  3. AI can minimize the repetitive work that frequently reduces the effectiveness of a security team by preprocessing data sets into meaningful collections.
  4. AI can significantly reduce analyst burnout arising from the need to make decisions based on correlated data in events, logs, and alerts with attack patterns. The noise can simply be reduced or even eliminated.
  5. AI can be implemented initially to detect advanced persistent threats as a part of threat hunting exercises, and it can operate unsupervised in controlled circumstances. This releases the security analyst to handle more appropriate tasks and to act as the final arbiter of processed decisions, as opposed to hunting for a threat that might not even exist.
  6. As the implementation matures, AI can be relied on to handle entire classes of security events that previously relied on human intervention. The human element is critical for oversight, but the mundane component could potentially be eliminated. If done correctly, this type of task can be automated, freeing up hours of analyst work.
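Items 1 and 3 above amount to preprocessing. A minimal deduplication pass, for example, collapses repeated alerts into single records with a count; the key fields chosen here are an assumption about the alert schema:

```python
def deduplicate(alerts, key_fields=("source", "rule", "target")):
    """Collapse repeated alerts into single records with a count --
    the kind of preprocessing that spares analysts from re-reading
    duplicates and reduces SIEM ingestion volume."""
    merged = {}
    for alert in alerts:
        key = tuple(alert[k] for k in key_fields)
        if key in merged:
            merged[key]["count"] += 1
        else:
            merged[key] = {**alert, "count": 1}
    return list(merged.values())

alerts = (
    [{"source": "fw1", "rule": "port-scan", "target": "10.0.0.5"}] * 5
    + [{"source": "fw2", "rule": "brute-force", "target": "10.0.0.8"}]
)
# Six raw alerts collapse to two records, one carrying count=5.
```

An AI-driven pipeline would group semantically similar (not just identical) alerts, but the analyst-facing benefit is the same: fewer, richer records.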

AI is an important mitigation strategy—but it shouldn’t be the ONLY solution you deploy

Artificial intelligence is a useful tool to supplement security best practices in the cloud, but it should not be treated as the only method of detection, prevention, and response. Nothing will completely replace the need for the security basics, security analysts, and cyber forensics needed to dig deep or to hunt for adversaries.

One thing that remains certain is that AI will continue to evolve. Once the security basics have matured, AI should be considered a detection and mitigation technology within your cloud strategy. And, as you continue this journey, consider the evolution of ML, correlation, pattern matching, etc. to improve your response.

About the Author

Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud-based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.
