
On the Criticality of SDLC Context for Vulnerability Remediation

Published 01/25/2023

Originally published by Dazz.

Written by Eyal Golombek, Director of Product Management, Dazz.

Risk can go undetected when full context of the SDLC is missing

Risk to cloud environments originates from many possible sources, and managing it requires a deep understanding of how that risk ends up in the cloud in the first place. For organizations with internal development teams, a major vector that introduces risk into the cloud is the software development lifecycle (SDLC). To mitigate this vector, security teams implement application security practices. However, the common guardrails and solutions in use today are siloed and miss the bigger picture of the SDLC. Choosing best-in-breed detection tools is a good approach, but it requires an extra step: distilling all of the detection data into coherent, high-fidelity, actionable findings.

For example, common solutions such as software composition analysis (SCA) scan the code managed by an organization. They do a great job of identifying third-party software packages that introduce vulnerabilities in the form of CVEs. However, SCA tools (and other code scanners) are siloed to the code itself: they lack the context of the pipeline, the time of deployment, and the cloud environment. This context is crucial for security teams to fully assess risk and devise a remediation plan.
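
To make the idea concrete, here is a minimal sketch of the kind of check an SCA tool performs: parsing a pinned dependency manifest and matching the declared versions against an advisory feed. The advisory entries, package name, and file layout below are illustrative placeholders rather than real data; version comparison relies on the packaging library.

# Minimal sketch of an SCA-style dependency check.
# The advisory data below is an illustrative placeholder, not a real feed.
from packaging.version import Version

# Hypothetical advisory feed: package -> list of (fixed_in_version, advisory id)
ADVISORIES = {
    "examplepkg": [(Version("1.2.3"), "CVE-XXXX-NNNNN")],
}

def parse_requirements(path):
    """Read pinned dependencies from a requirements.txt-style file."""
    pins = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, _, version = line.partition("==")
            pins[name.strip().lower()] = Version(version.strip())
    return pins

def find_vulnerable(pins):
    """Flag any pinned package older than the advisory's fixed version."""
    findings = []
    for name, pinned in pins.items():
        for fixed_in, advisory in ADVISORIES.get(name, []):
            if pinned < fixed_in:
                findings.append((name, advisory))
    return findings

if __name__ == "__main__":
    print(find_vulnerable(parse_requirements("requirements.txt")))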

Consider this scenario: addressing the risk of third-party packages in containerized environments. A container image is almost always built on top of a public third-party base image. A common Dockerfile might start with a line similar to the following:
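
FROM python:3.10-slim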

The container built from this line will use the official Python container image as its base; the specific image used is the one tagged 3.10-slim. When a code scanner scans this file, it looks for vulnerabilities in whatever image is tagged 3.10-slim at the moment of the scan. The problem is that tags like this are mutable: the image tagged 3.10-slim today isn't the same image that carried the tag a week ago. That difference produces different risk profiles, since each image contains different CVEs. A code scanner that only sees the code has no way of knowing when the image was actually built. The element of time is crucial to accurately measure which vulnerabilities are inside the container image that ends up running in our cloud.
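
One way to see this in practice is to check what the tag resolves to right now. The following sketch, which assumes Docker Hub's standard anonymous token auth and v2 registry API, fetches the manifest digest that python:3.10-slim currently points to; an image built from the same Dockerfile months ago was almost certainly built from a different digest, and therefore carries a different set of CVEs.

# Sketch: resolve what the mutable tag python:3.10-slim points to *today*.
# Assumes Docker Hub's standard anonymous token auth and v2 registry API.
import requests

REPO = "library/python"
TAG = "3.10-slim"

# 1. Get an anonymous pull token for the repository.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io", "scope": f"repository:{REPO}:pull"},
).json()["token"]

# 2. Ask the registry which manifest the tag currently resolves to.
resp = requests.head(
    f"https://registry-1.docker.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
)
resp.raise_for_status()

# The digest is the image's immutable identity; the tag is only a mutable
# pointer to it, so this value changes every time the tag is re-pushed.
print(resp.headers["Docker-Content-Digest"])

Pinning the base image by that digest (FROM python:3.10-slim@sha256:...) would make the build reproducible, but in practice most Dockerfiles pin only the tag, which is exactly why the time of the build matters.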

As this example shows, a code scanner might tell us that we are secure and that there are no vulnerabilities in our container when, in fact, the image actually running in our cloud environment is vulnerable because it was built a few months ago. Ultimately, what matters most is the risk that finds its way into our cloud environment. Code scanning is a good starting point, but it lacks crucial context to paint an accurate picture of our risk posture.

Full code-to-cloud context to the rescue

Looking at the entire code-to-cloud process holistically allows security teams to accurately assess the risk in their cloud and build efficient remediation plans. Seeing the process as a whole lets teams incorporate detections from multiple stages of the pipeline. When those data points are taken together, security teams can overcome the limitations of each individual detection tool and form a clear picture of risk.

Going back to our code scanner example, the missing context is the cloud environment. To accurately assess the risk, we need to understand exactly when the container image was built from the vulnerable code and which third-party base image was actually used. Moreover, more than one container image is usually built from the same vulnerable code file, at different points in time. To clearly assess the risk, we also need to know which image is actually running in our cloud environment and actively poses a risk to the business.

To overcome these challenges, we need to understand the entire code-to-cloud process. This can be achieved by attaching data points from the artifact store and the cloud environment itself. Pipeline mapping engines can automatically combine the context of the code and its scans with context from the container registries and live runtime context from the cloud environment to build the full picture. Having this context lets teams prioritize better, understand how risk truly affects their cloud environment, and build an efficient remediation plan.
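
As a rough illustration of that idea, the sketch below joins three hypothetical data sets: code-scan findings keyed by repository, artifact-store metadata recording which image digests were built from that repository, and a runtime inventory of digests currently running in the cloud. Only findings that trace to a digest that is actually deployed are surfaced for remediation. All field names, identifiers, and records are assumptions made for the example, not any particular product's schema.

# Sketch: correlate code-scan findings with what is actually running.
# All field names and records are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Finding:
    repo: str          # source repository the scanner flagged
    cve: str           # vulnerability identifier

@dataclass
class BuiltImage:
    repo: str          # repository the image was built from
    digest: str        # immutable image digest in the artifact store
    built_at: str      # build timestamp from the registry / CI metadata

def deployed_findings(findings, built_images, running_digests):
    """Keep only findings whose vulnerable repo produced an image
    that is currently running in the cloud environment."""
    repo_to_digests = {}
    for image in built_images:
        repo_to_digests.setdefault(image.repo, set()).add(image.digest)

    results = []
    for f in findings:
        live = repo_to_digests.get(f.repo, set()) & set(running_digests)
        if live:  # the vulnerable code is actually deployed
            results.append((f.cve, sorted(live)))
    return results

# Hypothetical inputs from the code scanner, artifact store, and cloud inventory.
findings = [Finding("payments-service", "CVE-XXXX-0001")]
built = [BuiltImage("payments-service", "sha256:aaa...", "2022-10-01")]
running = ["sha256:aaa..."]
print(deployed_findings(findings, built, running))

In practice these inputs would come from the code scanner, the container registry, and the cloud provider's workload inventory; the join on the immutable image digest is what turns isolated detections into a picture of risk that is actually deployed.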
