
Is Managed Kubernetes the Right Choice for My Organization?

Blog Article Published: 05/07/2024

Originally published by Tenable.

Written by Mark Beblow.

Many enterprises have adopted container technology because it helps them streamline building, testing and deploying applications. The benefits of container technology include better resource efficiency, portability, consistency and scalability. However, as organizations ramp up the number of deployed containers, management overhead increases accordingly. To deal with that overhead, organizations are adopting Kubernetes, which has become the de facto standard for container orchestration. In this blog, we will delve into Kubernetes security and outline reasons why you should choose a managed Kubernetes service instead of managing it in-house.


Managed or self-hosted?

You have many choices for deploying Kubernetes. There are more than 90 certified Kubernetes offerings, including customized distributions, managed environments and installers. You can either self-host or leverage managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE).

If you opt for self-hosted Kubernetes, as the name implies, you’ll have to manage the complete environment. With managed Kubernetes, the cloud service provider manages the Kubernetes control-plane components, including hardening, patching, availability, consistency of dependencies, scaling and backup management. In some cases, such as GKE Autopilot clusters, the provider also handles node management, provisioning and scalability.

Why do many organizations choose managed Kubernetes and the shared responsibility model? This option offers the cloud’s flexibility while offloading much of the environment’s management to the provider. Compared with a default self-hosted configuration, a managed Kubernetes cluster comes with additional security configurations applied and maintained by the service provider.

The Center for Internet Security (CIS) has benchmarks for self-hosted Kubernetes, as well as for major cloud-provider Kubernetes offerings such as Amazon’s EKS, Azure’s AKS and Google’s GKE. In fact, the CIS guidance for securing Kubernetes is an excellent starting place.

The CIS Kubernetes Benchmark offers almost 70 recommendations for control-plane components in self-hosted Kubernetes environments. Those recommendations include:

  • Ensure proper permission and ownership of control-plane configuration files and data directories. This allows you to make sure that configuration files and data directories are configured with permissions that prevent unauthorized access or modification.
  • Configure TLS security for control-plane service communications. When you enable TLS security, communications between Kubernetes services are protected from unauthorized access or manipulation.
  • Ensure the appropriate API server admission-control plugins are enabled, and risky ones disabled. Admission-control plugins can limit requests to create, delete and modify objects.
  • Restrict the API server authorization mode. This ensures the API server does not authorize all requests indiscriminately (for example, by disallowing the AlwaysAllow mode).
  • Set API server logging parameters such as path, age and size. With this configuration, you can collect security-relevant log records for activities performed by individual users, administrators or components of the system.
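To make these self-hosted recommendations concrete, here is a sketch of what applying them can look like on a control-plane node. The file paths, plugin lists and log-retention values are illustrative examples, not prescriptive settings; consult the CIS Kubernetes Benchmark for the exact recommended values for your version.

```shell
# Restrict permissions and ownership on control-plane configuration files
# (CIS file-permission and ownership checks). Paths are examples.
chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml

# Illustrative kube-apiserver flags covering TLS, authorization mode,
# admission control and audit logging:
kube-apiserver \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100
```

With a managed offering, as noted below, the provider applies this class of control-plane configuration for you.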

The managed CIS Kubernetes Benchmarks for cloud service providers don’t include these recommendations, because they are the responsibility of the service provider. In short, when you use a managed Kubernetes offering, you don’t have to take on the task of applying these CIS recommendations.

In addition, many managed Kubernetes providers offer integrations with services such as:

  • Identity and access management, to allow administrators to manage identities and to authorize who can perform what actions on resources.
  • Key and secrets management, including the management of SSL/TLS certificates, application secrets, and more.
  • Image registry and image scanning, which offer a private registry for container images, with the ability to scan container images for vulnerabilities.
  • Software-defined load balancing for Kubernetes traffic.
  • Elastic provisioning and scalability, which gives you visibility into available capacity and the ability to increase/decrease resources in a timely manner.
  • Cluster networking, which allows cluster-node network connectivity across the zone or region, depending on cluster type.
  • Logging and monitoring, which stores logs and events related to Kubernetes resources and dependent services so they’re accessible for querying, alerting and monitoring.


Organization maturity

You should also consider your organization’s Kubernetes experience and expertise. It can take significant time and resources to maintain Kubernetes clusters and their supporting technologies. Many managed Kubernetes providers have central web interfaces for managing cluster and worker-node settings for all of your clusters. Users can focus on deploying and managing their containerized applications rather than managing the underlying infrastructure. You can easily scale clusters by adding or removing worker nodes to handle workload and traffic changes, either manually or by configuring elastic scaling, which increases or decreases the number of nodes based on load. Similarly, pods and workloads can be auto-scaled horizontally as well.
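As a sketch of horizontal pod auto-scaling, the manifest below defines a HorizontalPodAutoscaler targeting a hypothetical Deployment named "web" (the name, replica bounds and utilization target are illustrative assumptions):

```yaml
# Scales the "web" Deployment between 2 and 10 replicas,
# aiming for 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Cluster-level node auto-scaling is configured separately, typically through the managed provider’s node-pool settings.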


Protecting Kubernetes workloads

Whether you choose self-hosted or managed Kubernetes, make sure you secure Kubernetes nodes from the running workloads. You can reduce workload privileges and access by applying security-context settings to minimize the admission of:

  • Privileged containers, which have access to all Linux Kernel capabilities and devices.
  • Containers that share the host process ID, IPC, or network namespaces, and that can inspect and interact with processes outside the container, or access traffic to and from other pods.
  • Containers with allowPrivilegeEscalation, a setting that can allow a process to obtain more rights than it started with.
  • Root containers, in which the container process runs as root, increasing the likelihood of container breakout.
  • Containers with added or assigned capabilities, which permit certain root actions without granting full root privileges.
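The settings above map directly to fields in a pod’s spec and container securityContext. The following is a minimal sketch of a pod that avoids each of the risky configurations listed; the pod name and image are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  hostPID: false        # do not share the host process ID namespace
  hostIPC: false        # do not share the host IPC namespace
  hostNetwork: false    # do not share the host network namespace
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      securityContext:
        privileged: false                # no access to all kernel capabilities/devices
        allowPrivilegeEscalation: false  # process cannot gain more rights than it started with
        runAsNonRoot: true               # refuse to run the container process as root
        capabilities:
          drop: ["ALL"]                  # drop all Linux capabilities
```

Admission mechanisms such as Pod Security Admission can enforce these settings cluster-wide rather than relying on each workload to set them.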

For more information on securing Kubernetes resources in cloud environments, watch the on-demand webinar “Kubernetes Confessions: Tune In and Get the Help You Need to Finally Put An End to Those Risky K8s Security Sins.”



About the Author

Mark Beblow is a Compliance Engineer at Tenable, specializing in writing policy compliance audits. Mark has over 10 years’ experience in information security, having worked as a security analyst and system administrator for organizations such as BHP Billiton, Cameco and Viterra prior to joining Tenable.
