Four Ways to Quickly Determine Your Atomization Issue and Next Steps to Fix It
Published 07/27/2023
Originally published by Netography.
Written by Martin Roesch, CEO, Netography.
Network atomization didn’t happen overnight. It’s been a progression over the last fifteen-plus years, driven by digital transformation, a rise in multi-cloud strategies, and the shift to a hybrid workforce. And now we’re seeing the Boiling Frog phenomenon: some of the core approaches to network security are being rendered obsolete, and nothing has replaced them. As a result, organizations are experiencing a confluence of network traffic monitoring and security challenges that make it difficult to adequately protect their Atomized Networks.
However, if we look at the definition of an Atomized Network – a network environment that is dispersed, ephemeral, encrypted, and diverse – we can map security issues to these four areas and understand how to fix them.
1. Dispersed – the “where” issue.
Modern enterprise networks comprise multi-cloud, hybrid-cloud, and on-premises infrastructure. Getting network visibility and control where we need it in dispersed environments is problematic because classic sensor-based architectures make it difficult to provision sensors everywhere they need to be. Additionally, sensor-based architectures are limited in their vantage point: they only see the traffic passing through very specific points on the network. And cloud-native security vendors focus on providing visibility into cloud environments with little regard for the fact that a modern enterprise may be atomized and, therefore, not just in a cloud but also on-prem.
The fix:
What’s needed is a cloud-native solution that leverages the underlying network infrastructure without the need to deploy software or hardware so that any place organizations need visibility they can have it immediately.
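As one hedged illustration of what leveraging the underlying network infrastructure can look like, the sketch below uses the AWS SDK for Python to turn on VPC Flow Logs for every VPC in an account – no hardware or agents to deploy. Flow telemetry is an assumption here, not a description of any particular product, and the S3 bucket ARN and region are hypothetical.

```python
# Minimal sketch: enable flow telemetry on every VPC in an AWS account
# without deploying sensors. Assumes boto3 credentials are configured and
# that the S3 bucket ARN below (hypothetical) already exists.
import boto3

LOG_DESTINATION = "arn:aws:s3:::example-flow-log-bucket"  # hypothetical bucket

ec2 = boto3.client("ec2", region_name="us-east-1")

for vpc in ec2.describe_vpcs()["Vpcs"]:
    vpc_id = vpc["VpcId"]
    # Skip VPCs that already have flow logs attached.
    existing = ec2.describe_flow_logs(
        Filters=[{"Name": "resource-id", "Values": [vpc_id]}]
    )["FlowLogs"]
    if existing:
        continue
    ec2.create_flow_logs(
        ResourceIds=[vpc_id],
        ResourceType="VPC",
        TrafficType="ALL",              # capture accepted and rejected traffic
        LogDestinationType="s3",
        LogDestination=LOG_DESTINATION,
    )
    print(f"Enabled flow logs for {vpc_id}")
```

Because the telemetry comes from the cloud provider’s own infrastructure, the same pattern extends to any account or region the moment visibility is needed there.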
2. Ephemeral – the “when” issue.
Getting network visibility and control when we need it, given the ephemeral nature of today’s networks, is also a problem. This is especially true in cloud environments, where workloads can spin up and down in a matter of minutes. When services can come and go – often without the knowledge or buy-in of security teams – there has to be a way for network visibility and control to come and go in lockstep. The standard way of provisioning isn’t well-suited to environments where we might not even be aware of all the networks we really have and where they are, let alone whether changes have happened that require a change in provisioning. And the ability to dynamically provision more capability is extremely limited because it is tied to being able to deploy software, whether wrapped in a piece of hardware or delivered as a virtual appliance, sidecar, or agent.
The fix:
Organizations need the ability to provision visibility and control with the same dynamism as the environments they operate in.
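One hedged sketch of that lockstep behavior is event-driven provisioning: when the cloud control plane reports that a new network was created, visibility is attached to it automatically. The hypothetical AWS Lambda handler below assumes an EventBridge rule forwarding CloudTrail CreateVpc events; the event field names and bucket ARN are illustrative assumptions.

```python
# Hypothetical Lambda handler: when CloudTrail reports a CreateVpc call,
# attach flow logging to the new VPC so visibility appears in lockstep
# with the workload. Event field names assume the CloudTrail record format.
import boto3

LOG_DESTINATION = "arn:aws:s3:::example-flow-log-bucket"  # hypothetical bucket
ec2 = boto3.client("ec2")

def handler(event, context):
    detail = event.get("detail", {})
    vpc_id = detail.get("responseElements", {}).get("vpc", {}).get("vpcId")
    if not vpc_id:
        return {"status": "ignored"}  # not a CreateVpc event we can act on
    ec2.create_flow_logs(
        ResourceIds=[vpc_id],
        ResourceType="VPC",
        TrafficType="ALL",
        LogDestinationType="s3",
        LogDestination=LOG_DESTINATION,
    )
    return {"status": "provisioned", "vpc": vpc_id}
```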
3. Encrypted – the “what” issue.
The pervasive use of encryption is blinding the deep packet inspection (DPI) technologies we’ve traditionally relied on to detect attacks. So in environments where we have sensors that can get clear-text data, we have one set of security capabilities, and in environments where we have to do decryption, we have a different set of capabilities. The disparity in what we get increases cost and complexity, and we still may not have any real awareness of when and where we lack visibility, or whether the capability deployed is able to do its job. Attackers take advantage of this opacity and these gaps, which make it difficult to detect and stop attacks.
The fix:
With an approach that is unhindered by encryption, we can have an equal set of capabilities everywhere all the time.
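Flow metadata – who talked to whom, when, over which ports, and how much – remains visible even when payloads are encrypted, so detections built on it behave the same everywhere. The toy sketch below illustrates the idea by flagging unusually large outbound transfers from flow records alone; the record fields and threshold are assumptions for illustration.

```python
# Toy sketch: detect oversized outbound transfers from flow metadata alone.
# No payload inspection, so encryption does not change what this sees.
# The record fields and the 500 MB threshold are illustrative assumptions.
from collections import defaultdict

OUTBOUND_THRESHOLD_BYTES = 500 * 1024 * 1024  # flag >500 MB to one external host

flows = [
    {"src": "10.0.1.15", "dst": "203.0.113.9", "dst_port": 443, "bytes": 420_000_000},
    {"src": "10.0.1.15", "dst": "203.0.113.9", "dst_port": 443, "bytes": 310_000_000},
    {"src": "10.0.2.40", "dst": "198.51.100.7", "dst_port": 22, "bytes": 1_200_000},
]

def is_internal(ip: str) -> bool:
    # Crude private-address check for the example's 10.x.x.x space.
    return ip.startswith("10.")

# Sum bytes per (internal source, external destination) pair.
totals = defaultdict(int)
for f in flows:
    if is_internal(f["src"]) and not is_internal(f["dst"]):
        totals[(f["src"], f["dst"])] += f["bytes"]

for (src, dst), total in totals.items():
    if total > OUTBOUND_THRESHOLD_BYTES:
        print(f"ALERT: {src} sent {total / 1e6:.0f} MB to {dst}")
```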
4. Diverse – the “how” issue.
Atomized networks can span up to three types of environments: IT, cloud, and operational technology (OT). In many instances we have different tools for each environment – including for each of the different clouds – run by different teams. These tools may or may not be looking for the same things, and they all use their own discrete languages to define what hostile activity or an anomaly looks like. Each tool frequently has its own configurations and threat definitions, and its own eventing and reporting platform. How these tools generate events and report on them varies, which can produce very different results and no cohesive picture for common understanding and comprehensive risk mitigation.
The fix:
Organizations need a way to bring teams and tools together so they can be more efficient and effective.
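A common first step toward bringing those tools together is normalizing their events into a single schema so every team reasons about them in the same language. The sketch below maps two made-up source formats into one common record; the field names, severities, and sample events are illustrative assumptions, not any vendor’s actual output.

```python
# Sketch: normalize events from hypothetical IT and cloud tools into one
# common record (an OT format would follow the same pattern), so different
# teams can work from the same picture. All fields below are made up.
from dataclasses import dataclass

@dataclass
class NormalizedEvent:
    source_tool: str
    environment: str   # "it", "cloud", or "ot"
    severity: str      # "low", "medium", "high"
    src_ip: str
    summary: str

def normalize(raw: dict) -> NormalizedEvent:
    if raw.get("vendor") == "it_ids":          # hypothetical on-prem IDS format
        return NormalizedEvent("it_ids", "it", raw["priority"],
                               raw["source_address"], raw["msg"])
    if raw.get("provider") == "cloud_monitor": # hypothetical cloud tool format
        sev = {1: "low", 2: "medium", 3: "high"}[raw["sev_code"]]
        return NormalizedEvent("cloud_monitor", "cloud", sev,
                               raw["srcIp"], raw["description"])
    raise ValueError("unknown event format")

events = [
    {"vendor": "it_ids", "priority": "high", "source_address": "10.0.1.15",
     "msg": "Possible lateral movement"},
    {"provider": "cloud_monitor", "sev_code": 2, "srcIp": "172.16.4.8",
     "description": "Unusual egress volume"},
]

for e in events:
    print(normalize(e))
```

Once events share a schema, teams can compare, correlate, and report on them consistently regardless of which environment or tool produced them.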
We believe the Atomized Network is not just the new normal – it’s fundamentally changing what’s required for effective network traffic monitoring and security.