How Legacy AST Tools Fail to Secure Cloud Native Applications
Published 12/17/2021
Written by Ron Vider, Co-Founder & CTO of Oxeye
Organizations worldwide are building and deploying cloud native applications, whose architecture is quite different from yesterday’s monolithic counterparts. What used to be a custom code block installed on a single bare-metal server or virtual machine has morphed into hundreds of small, independent pieces of code that run as loosely coupled microservices, executed as orchestrated containers and deployed in the cloud.
Distributed, cloud-based applications pose challenges to yesterday’s application security testing (AST) solutions. What follows is an overview of AST and the challenges that DAST, SAST, IAST, and SCA tools face when assessing vulnerabilities in cloud native applications.
As part of the software development lifecycle (SDLC), companies today leverage legacy AST tools to run security scans against their applications. But these tools generate many false positives and miss critical vulnerabilities, failing in their effort to secure cloud native applications.
More specifically, these solutions do not fully secure today’s containerized applications built from distributed microservices. What used to be a vulnerability that started and ended within the same monolithic code segment is now an exposed vulnerable flow involving multiple microservices and infrastructure layers, as the sketch below illustrates.
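To make this concrete, here is a minimal sketch in Python/Flask of such a distributed vulnerable flow. The two services, their routes, and the internal hostname are hypothetical; in reality each would be a separately deployed container:

```python
# Hypothetical two-service flow: the vulnerability spans both services.
# (In reality each app would be a separately deployed container.)
from flask import Flask, request
import requests
import sqlite3

# Service A (public-facing) never touches a database, so a per-service
# scan of it finds no dangerous sink - it merely forwards user input.
service_a = Flask("service_a")

@service_a.route("/search")
def search():
    term = request.args.get("q", "")
    return requests.get("http://service-b.internal/query",
                        params={"q": term}).text

# Service B (internal) trusts its caller and builds SQL from the raw value.
service_b = Flask("service_b")

@service_b.route("/query")
def query():
    term = request.args.get("q", "")
    db = sqlite3.connect("catalog.db")
    # The source (user input) lives in service A; the sink (a query built
    # by string concatenation) lives here - an injectable cross-service
    # flow that neither service reveals on its own.
    rows = db.execute("SELECT name FROM products WHERE name LIKE '%"
                      + term + "%'").fetchall()
    return {"results": [row[0] for row in rows]}
```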
Many organizations report that AST tools are outdated and do not deliver effective results, leading to frustration and wasted development resources. Migration to the cloud has a major effect on how vulnerabilities come into existence, such that cloud native application security testing requires a new approach.
The AST spectrum
None of these tools was designed to test cloud native applications. They can be divided into the following categories:
- Dynamic application security testing (DAST)
- Static application security testing (SAST)
- Interactive application security testing (IAST)
- Software composition analysis (SCA)
- API security testing tools, manual testing tools, and fuzzers
Let’s rank each category along three dimensions: developer friendliness, testing coverage, and accuracy of results. The scoring is not scientific in any way, but it gives an impression of each tool category’s strengths and weaknesses.
DAST – DAST tools assess vulnerabilities in cloud native applications at runtime, but they only test exposed HTTP and HTML application interfaces. They crawl a web application, collecting information about exposed entry points (e.g., URLs, parameters, cookies), then actively initiate attacks such as SQL injection and cross-site scripting (XSS). On the plus side, modern tools can also perform scans at the individual microservice level.
| Criterion | Score | Notes |
|---|---|---|
| Developer friendliness | 3 | External tool created for penetration testers; doesn’t provide code-level remediation guidance |
| Testing coverage | 5 | Highly dependent on application crawling quality |
| Results accuracy | 7 | Findings are usually exploitable and accurate, but vulnerabilities may be missed due to poor application coverage |
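To illustrate the mechanics, here is a minimal, hypothetical DAST-style probe in Python: it crawls a single page for parameterized links, then replays each parameter with a canary payload and watches for reflection. The target URL is a placeholder, and real scanners do far more, but the crawl-then-attack loop is the same:

```python
# Minimal DAST-style probe (sketch): crawl one page for links, then
# replay each discovered query parameter with an XSS canary payload.
import requests
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse, parse_qs

CANARY = "<script>alert('dast-canary')</script>"

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags on a crawled page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def scan(base_url):
    # Step 1: crawl the entry page and collect parameterized links.
    collector = LinkCollector()
    collector.feed(requests.get(base_url, timeout=10).text)
    for link in collector.links:
        url = urljoin(base_url, link)
        params = parse_qs(urlparse(url).query)
        if not params:
            continue
        # Step 2: attack each parameter and look for unescaped reflection.
        for target in params:
            tampered = {k: v[0] for k, v in params.items()}
            tampered[target] = CANARY
            resp = requests.get(url.split("?")[0], params=tampered, timeout=10)
            if CANARY in resp.text:
                print(f"Possible reflected XSS: {url} parameter {target!r}")

scan("http://vulnerable-app.example/")  # placeholder target
```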
SAST – Looking for coding and design conditions indicative of security vulnerabilities, these tools analyze application source code, bytecode, and binaries in a non-running (static) state. To locate application-layer vulnerabilities, SAST tools detect the source function – the “entry point” where user input enters the application – and the sink function (e.g., a database call or system call) that eventually uses that input.
| Criterion | Score | Notes |
|---|---|---|
| Developer friendliness | 10 | Dev-centric, integrated into IDEs, provides code-level remediation guidance |
| Testing coverage | 8 | Sees most of the code base, but lacks visibility into external components such as public cloud services |
| Results accuracy | 5 | Highly prone to false positives and usually reports non-exploitable issues; often oblivious to custom input sanitization or validation functions |
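A toy illustration of the source-to-sink idea, using Python’s ast module: the sketch below flags SQL queries assembled from dynamic strings at an execute() call. Real SAST engines track taint across assignments, functions, and files; this only catches the direct pattern:

```python
# Toy SAST pass (sketch): statically flag execute() calls whose query
# is assembled from dynamic strings instead of using bound parameters.
import ast

VULNERABLE_SNIPPET = '''
term = request.args.get("q")                                    # source
cursor.execute("SELECT * FROM t WHERE name = '" + term + "'")   # sink
'''

def find_string_built_queries(source_code):
    findings = []
    for node in ast.walk(ast.parse(source_code)):
        # Sink: any call to a method named "execute" with arguments.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args):
            query_arg = node.args[0]
            # Concatenation (BinOp) or an f-string (JoinedStr) means the
            # query is built at runtime from potentially tainted data.
            if isinstance(query_arg, (ast.BinOp, ast.JoinedStr)):
                findings.append(node.lineno)
    return findings

for lineno in find_string_built_queries(VULNERABLE_SNIPPET):
    print(f"line {lineno}: SQL built from dynamic strings - possible injection")
```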
IAST – IAST tools use instrumentation that combines DAST and SAST techniques to increase accuracy. This combination permits DAST-like confirmation of exploit success and SAST-like application code coverage. In some cases, IAST enables security self-testing during general application testing.
| Criterion | Score | Notes |
|---|---|---|
| Developer friendliness | 8 | Geared toward developers by flagging vulnerable lines |
| Testing coverage | 5 | Dependent on application crawling or testing quality |
| Results accuracy | 7 | Similar to DAST |
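A rough sketch of the instrumentation idea in Python: a decorator stands in for the bytecode-level hooks a real IAST agent installs, reporting when a value marked as user input reaches a sink while ordinary functional tests drive the application. All names here are illustrative:

```python
# IAST-style instrumentation (sketch, not a real agent): wrap a sink
# function so that, while functional tests exercise the application,
# tainted user input reaching the sink is confirmed at runtime
# (DAST-like) and reported with its code location (SAST-like).
import functools
import traceback

TAINTED_VALUES = []  # values observed arriving from user input

def mark_tainted(value):
    """Called wherever user input enters the application (the source)."""
    TAINTED_VALUES.append(value)
    return value

def instrument_sink(func):
    """Decorator standing in for an agent's bytecode instrumentation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for arg in args:
            # Substring match survives string concatenation en route.
            if isinstance(arg, str) and any(t in arg for t in TAINTED_VALUES):
                caller = traceback.extract_stack()[-2]
                print(f"Tainted input reached sink '{func.__name__}' "
                      f"called from {caller.filename}:{caller.lineno}")
        return func(*args, **kwargs)
    return wrapper

@instrument_sink
def run_query(sql):
    pass  # stand-in for a real database call

# A normal functional test drives the app; the "agent" observes the flow:
user_input = mark_tainted("'; DROP TABLE users;--")
run_query("SELECT * FROM users WHERE name = '" + user_input + "'")
```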
SCA – SCA tools scan an application’s source code – including related artifacts such as containers and registries – to inventory all open-source components, along with their licenses and any known security vulnerabilities.
| Criterion | Score | Notes |
|---|---|---|
| Developer friendliness | 10 | Perfect for developers, integrated into the CI/CD process |
| Testing coverage | 2 | Only flags known vulnerable open-source packages |
| Results accuracy | 5 | Accurately flags known issues in open-source packages, but fails to find vulnerabilities in the rest of the application |
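A toy version of the SCA approach in Python, matching pinned dependencies against a tiny, hard-coded advisory list (real tools consume full feeds such as OSV or the NVD, plus license metadata):

```python
# Toy SCA check (sketch): match pinned dependencies against a
# hard-coded advisory list. Real SCA tools pull complete, current
# feeds (e.g., OSV, NVD) and also inspect containers and registries.

# (package name, version) -> advisory id; illustrative entries only.
KNOWN_VULNS = {
    ("urllib3", "1.25.8"): "CVE-2020-26137",  # CRLF injection, fixed in 1.25.9
    ("pyyaml", "5.3.1"): "CVE-2020-14343",    # code execution via full_load
}

SAMPLE_REQUIREMENTS = """\
requests==2.25.1
urllib3==1.25.8
pyyaml==5.3.1
"""

def scan_requirements(text):
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        advisory = KNOWN_VULNS.get((name.lower(), version))
        if advisory:
            print(f"{name}=={version}: known vulnerability {advisory}")

scan_requirements(SAMPLE_REQUIREMENTS)
```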
API testing & manual fuzzers
Web API testing performs fuzz testing of input parameters: setting them to unusual values provokes unexpected behavior and errors in the API backend, which helps discover bugs and potential security issues that other QA processes might miss.
| Criterion | Score | Notes |
|---|---|---|
| Developer friendliness | 3 | Aimed at penetration testers or software quality testers |
| Testing coverage | 2 | Only flags issues in components that have been manually crawled or functionally tested |
| Results accuracy | 3 | Usually flags behavior anomalies without fully understanding the issue at hand or its potential exploitability |
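A minimal fuzzing loop in Python illustrates the idea; the endpoint and parameter name are hypothetical, and the anomaly checks (a 5xx status or stack-trace text in the body) are deliberately crude:

```python
# Minimal API fuzzer (sketch): replay an endpoint with unusual
# parameter values and flag responses that hint at unhandled errors.
import requests

ODD_VALUES = [
    "",                        # empty input
    "A" * 10_000,              # oversized input
    "-1", "0", "99999999999",  # numeric boundaries
    "null", "{}", "[]",        # type confusion
    "'\"<>;|&",                # metacharacters
]

def fuzz(url, param):
    for value in ODD_VALUES:
        try:
            resp = requests.get(url, params={param: value}, timeout=5)
        except requests.RequestException as exc:
            print(f"{param}={value!r}: transport error {exc}")
            continue
        # A 5xx response or a leaked stack trace suggests the backend
        # did not handle the unusual input gracefully.
        if resp.status_code >= 500 or "Traceback" in resp.text:
            print(f"{param}={value!r}: status {resp.status_code} - investigate")

fuzz("http://api.example.internal/v1/items", "id")  # hypothetical endpoint
```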
The table below summarizes the scoring for each of the tool categories:

| Category | Developer friendliness | Testing coverage | Results accuracy |
|---|---|---|---|
| DAST | 3 | 5 | 7 |
| SAST | 10 | 8 | 5 |
| IAST | 8 | 5 | 7 |
| SCA | 10 | 2 | 5 |
| API testing & fuzzers | 3 | 2 | 3 |
The next table lists the shortcomings of legacy AST tools when it comes to efficiently scanning cloud native applications:
| AST Category | Why It Doesn’t Work | Scan Outcome |
|---|---|---|
| DAST | Results are based on inspection of external behavior (reaction) rather than inner microservice activity | Many false negatives due to lack of proper coverage and inability to detect internal vulnerabilities; lack of context also produces false positives |
| SAST | Tests each microservice separately, ignoring context and the big picture | False negatives/positives because the scanner cannot assess full context and application flow |
| IAST | Cloud native applications require deployment and maintenance of dozens or even hundreds of IAST agents | When deployed properly, has the highest chance of producing quality results, but might miss vulnerabilities due to improper coverage (similar to DAST) |
| SCA | Only flags known issues in open-source packages | Extremely low testing coverage; completely disregards custom application code |
| API testing | Only capable of finding local issues in each microservice; doesn’t see the full context and completely disregards application logic | Can detect some application behavior anomalies at the individual microservice level |
The challenges legacy AST tools face when assessing vulnerabilities are well understood. Cloud native application security testing requires a different paradigm with respect to how vulnerabilities are found, assessed, and resolved. Future analysis will explore the downsides of these solutions when scanning cloud native applications in greater depth.
About the Author
Ron Vider is the Co-founder and CTO of Oxeye, a company focused on cloud native application security testing. A cybersecurity expert, he has demonstrated management and leadership skills, with experience in network security, web application security, and software development.