
Five Levels of Vulnerability Prioritization: From Basic to Advanced


Blog Article Published: 09/04/2024

Originally published by Dazz.


Vulnerabilities are being disclosed at a record pace. Since the Common Vulnerabilities and Exposures (CVE) program was established by MITRE in 1999, over 300,000 unique vulnerabilities have been published - and a significant portion of these have been found in the last few years.

Because many of these vulnerabilities affect incredibly common software and operating systems, security teams have found themselves drowning in findings - to the point where it's now accepted that more vulnerabilities will always be found than you have the capacity to fix.

This is why vulnerability prioritization has become so vital - shortlisting which vulnerabilities present the most risk and need to be fixed is a necessary function for any company regardless of resources. It’s become a staple of Application Security Posture Management (ASPM) and Continuous Threat Exposure Management (CTEM) practices, but many teams still don’t know how to establish a more advanced vulnerability prioritization framework.

The good news? Most teams already have many different data points and tools at their fingertips for effective vulnerability prioritization. However, many teams take advantage of only one or two of these data points, limiting their ability to truly know what needs to be fixed first.


The Levels of Vulnerability Prioritization in 2024

In this post, we’ll discuss five facets of vulnerability prioritization ranging from basic to advanced. These are:

  • Vulnerability severity
  • Threat intelligence and exploitability
  • Asset context and exposure
  • Business context
  • Effort to fix


Vulnerability severity

The severity of a given vulnerability is the first point of context teams often consider when prioritizing vulnerabilities. There are many different data points on severity: certain detection tools may weight vulnerabilities with custom logic, and of course there is the Common Vulnerability Scoring System (CVSS). CVSS has been updated many times over the years, and is supported by nearly every security tool. It’s a great place to start as the score itself accounts for how complex the vulnerability is to exploit, whether the attack is automatable by attackers, the privileges required for attackers, and more.

Due to its comprehensiveness in assessing the vulnerability, CVSS has become a standard for many teams. However, with vulnerabilities being exploited at record pace, many teams have found CVSS on its own to be limiting, and have started to incorporate threat intelligence feeds.
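As a starting point, CVSS v3.1 publishes a qualitative severity rating scale that maps base scores to the labels most tools display. A minimal sketch of that mapping:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS score out of range: {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Using severity bands alone, every finding at 9.0 or above lands in the same "Critical" bucket - which is exactly why the sections below layer on additional context.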

Threat intelligence and exploitability

It's one thing to know how easy a vulnerability is to exploit, but another to know that it's actually being exploited right now. There are many tools teams have at their disposal to get a sense of how likely it is that a given vulnerability in their environment will be exploited, by which threat actors, and more. Bonus: some of these are open source!

One great data point is the CISA Known Exploited Vulnerabilities (KEV) catalog, launched in 2021. This list features vulnerabilities that are known to be exploited in the wild, and government organizations that fall under CISA directives are obligated to apply mitigations or remediate these vulnerabilities within a specific time frame. The list also provides a great remediation shortlist for commercial organizations - even if they're not subject to the same timelines.

However, the KEV catalog may lag actual attack behavior. What if there is a critical vulnerability, but no confirmed exploitation just yet? That's why the Exploit Prediction Scoring System (EPSS) is another great openly available tool: it estimates the likelihood that a vulnerability will be exploited in the next 30 days. EPSS is derived from hundreds of variables and driven by both open source and commercial data sets.

Hint: not all high severity vulnerabilities are likely to be exploited!

Going even further - you may want to compare which vulnerabilities are present in your environment against specific threat actors, or specific attack types. This is where Threat Intelligence providers can offer custom tailored feeds that highlight the specific vulnerabilities being exploited in organizations similar to yours.

By using any combination of the threat intelligence sources above, teams can account for not just the impact of any security vulnerability, but the likelihood that it may be exploited in their environment. This drastically improves vulnerability remediation.
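As a minimal sketch of that combination, the function below splits findings into urgent and deferred buckets using KEV membership and an EPSS probability threshold. It assumes you've already downloaded the KEV catalog and EPSS scores into local data structures (both are freely published by CISA and FIRST); the 0.1 threshold is an illustrative assumption, not a recommendation.

```python
def prioritize_by_threat_intel(findings, kev_cves, epss_scores, epss_threshold=0.1):
    """Split findings into urgent vs. deferred using KEV membership and EPSS.

    findings: list of dicts, each with a "cve" key
    kev_cves: set of CVE IDs present in the CISA KEV catalog
    epss_scores: dict mapping CVE ID -> EPSS probability (0.0-1.0)
    """
    urgent, deferred = [], []
    for finding in findings:
        cve = finding["cve"]
        # KEV membership means confirmed exploitation in the wild: always urgent.
        # Otherwise, use the predicted exploitation probability from EPSS.
        if cve in kev_cves or epss_scores.get(cve, 0.0) >= epss_threshold:
            urgent.append(finding)
        else:
            deferred.append(finding)
    return urgent, deferred
```

A commercial threat-intelligence feed could slot in as a third signal alongside the KEV set and EPSS scores.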

Asset context and exposure

You've understood the severity and likelihood that a given vulnerability might be exploited. How do you begin to assess the impact side of the risk equation (risk = likelihood × impact)?

This is where understanding the context and exposure of all vulnerable assets is important. For instance, you’ll want to understand:

  • Are the vulnerabilities present on public-facing systems? Or would they require a foothold on internal networks to exploit?
  • What is affected? Is it a code repository, a public-facing server, or a data store?
  • What mitigating controls might I have in place that could lessen the likelihood of exploitation (network segmentation, firewalling, EDR, etc.)?

Getting answers to these questions shouldn’t necessarily tell you whether or not to remediate a vulnerability. Instead, they should inform the timeline of how quickly you remediate a given vulnerability.
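One way to fold these answers into the risk equation is to scale the raw likelihood × impact product by exposure and dampen it per mitigating control. The multipliers below are purely illustrative assumptions - tune them to your environment:

```python
# Illustrative exposure multipliers; tune these for your environment.
EXPOSURE_WEIGHTS = {"internet_facing": 1.0, "internal": 0.5, "isolated": 0.2}

def risk_score(likelihood: float, impact: float, exposure: str,
               mitigating_controls: int = 0) -> float:
    """risk = likelihood * impact, scaled by exposure and dampened per control."""
    score = likelihood * impact * EXPOSURE_WEIGHTS[exposure]
    # Each mitigating control (EDR, segmentation, firewalling, ...) halves the
    # score - a deliberately crude assumption for illustration.
    return score * (0.5 ** mitigating_controls)
```

The resulting number is best used to order remediation timelines, not to decide whether a vulnerability gets fixed at all.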

Business context

While it's obvious that business context should drive vulnerability prioritization, doing so isn't easy without the right data. Nearly everyone would agree that a medium-severity vulnerability affecting an application that generates millions of dollars in revenue is more important than a high-severity vulnerability found on a network printer.

However, many security teams aren’t easily able to join vulnerability data with business context. This is where correlating data with CMDBs, development platforms and documentation, directory systems and more helps. Once vulnerability data is enriched with business context, vulnerability prioritization becomes a lot more effective. Also, once you’ve added business context, executive reporting becomes a lot more specific and actionable.
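The enrichment step amounts to a join between vulnerability findings and an asset inventory. A minimal sketch, assuming a CMDB-style lookup keyed by asset ID (field names like "owner" and "criticality" are illustrative):

```python
def enrich_with_business_context(findings, cmdb):
    """Attach business context from a CMDB-style asset inventory to findings.

    findings: list of dicts, each with an "asset_id" key
    cmdb: dict mapping asset_id -> {"owner": ..., "criticality": ...}
    """
    enriched = []
    for finding in findings:
        asset = cmdb.get(finding["asset_id"], {})
        enriched.append({**finding,
                         "owner": asset.get("owner", "unknown"),
                         "criticality": asset.get("criticality", "unrated")})
    # Findings on business-critical assets sort to the front of the queue.
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3, "unrated": 4}
    return sorted(enriched, key=lambda f: order[f["criticality"]])
```

The "owner" field is what makes executive reporting actionable: every finding now names the team accountable for fixing it.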

Effort to fix

One consideration many teams often ignore is the effort required to fix a vulnerability. Again, this may not determine whether you fix it, but it can dramatically alter your timeline to do so!

So, how does one estimate the effort to fix? It all starts with root cause analysis. If you understand the root cause of a vulnerability, you can get a better grasp on what’s needed to fix it.

You may panic when you see dozens of vulnerable containers running in production. However, it's possible that fixing just the container base image will eliminate all the downstream vulnerabilities.

On the other hand, you may have only one vulnerable server running a legacy application, but it may need to be rebuilt entirely and lack sufficient mitigating controls in the meantime. It is hard to estimate the true time needed to fix, but with complete context into what is impacted by the vulnerability, where the vulnerability originates, and who is best positioned to fix it, you can start to estimate the fix effort reliably.
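The base-image example above is really a grouping problem: cluster findings by their shared root cause, then fix the biggest clusters first. A minimal sketch, assuming each container finding records the base image it was inherited from (the "base_image" field is an illustrative assumption):

```python
from collections import defaultdict

def group_by_root_cause(findings):
    """Group container findings by base image so one fix closes many findings.

    findings: list of dicts, each optionally carrying a "base_image" key.
    Returns (base_image, findings) pairs, largest group first.
    """
    groups = defaultdict(list)
    for finding in findings:
        groups[finding.get("base_image", "unknown")].append(finding)
    # Largest groups first: patching one base image remediates the most findings.
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)
```

The same pattern generalizes to other root causes: a shared library, a shared infrastructure-as-code template, or a shared build pipeline.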

Expected results when you move from basic to advanced

The strategies outlined here can take your vulnerability prioritization practice from basic to advanced. But what can you actually achieve with more advanced prioritization?

  • Shrink your backlog: you're able to quickly agree on which vulnerabilities need to be fixed - and which ones can be closed or risk-accepted. This allows you to quickly reduce the noise and shrink your backlog to the issues that actually matter.
  • Reduce remediation time: A smaller backlog contextualized with the time needed to fix will in turn kick off the remediation process faster. This will lead to a meaningful drop in mean-time-to-remediate (MTTR), average vulnerability age, and other key metrics for your vulnerability management program.
  • Reduce business risk: by prioritizing and remediating vulnerabilities that actually present the most risk to your business, you’ll be able to meaningfully reduce overall business risk. There are many metrics you could use to infer gains here, such as loss avoidance, overall security posture metrics, and more.