Kubernetes 1.22 – What’s new?
Published 09/06/2021
This blog was originally published by Sysdig here.
Written by Víctor Jiménez Cerrada, Sysdig.
Kubernetes 1.22 was released in early August, and it comes packed with novelties! Where do we begin?
This release brings 56 enhancements, an increase from 50 in Kubernetes 1.21 and 43 in Kubernetes 1.20. Of those 56 enhancements, 13 are graduating to Stable, a whopping 24 are existing features that keep improving, and 16 are completely new.
It's great to see so many new features focusing on security, like the replacement for the Pod Security Policies, a rootless mode, and enabling Seccomp by default.
Also, watch out for all the deprecations and removals in this version!
There is plenty to talk about, so let's get started with what’s new in Kubernetes 1.22.
Kubernetes 1.22 – Editor’s pick:
These are the features that look most exciting to us in this release (ymmv):
#2579 Pod Security Policy replacement
After being deprecated in Kubernetes 1.21, we knew a replacement for Pod Security Policies was on the way, but we didn't know what it would look like. Now, we are happy to learn that it will be an Admission Controller, reusing part of the existing infrastructure.
#2033 Rootless mode containers
Not running containers as root is the No. 1 container security best practice. It's reassuring that this measure is being taken to the extreme, allowing us to run the entire Kubernetes stack in the user space. This is really going to make Kubernetes more secure.
#2413 Seccomp by default
If one thing is clear after this Kubernetes release, it’s the shift to security first. Adding this extra layer of security by default will render many potential exploits pointless. Steps like this will absolutely change how people see Kubernetes.
#2400 Node swap support
This is one of those small details we like to see in every Kubernetes release: it won't make headlines, but it will make lives much easier. Swap has always been a given for developers; now that it's supported, there's one less thing to worry about when using Kubernetes.
#2254 #2570 Cgroupsv2
Same as with swap, it's great to see support for more native Linux features. In this case, Cgroupsv2 is enabling more options for Memory QoS, which (in Kubernetes 1.22) will help avoid performance throttling of workloads running in Kubernetes.
Deprecations
Beta API and feature removals
As Kat Cosgrove warned us, a few beta APIs and features have been removed in Kubernetes 1.22, including:
- Ingress
- CustomResourceDefinition
- ValidatingWebhookConfiguration
- MutatingWebhookConfiguration
- CertificateSigningRequest
Not deprecated, but removed. That means that you won't be able to re-enable them with a feature flag. If you are using them, you'll have to migrate to their stable versions.
Check the Kubernetes blog for the full list of removals and steps to migrate. And keep the deprecated API migration guide close for the future.
#1558 Streaming proxy redirects deprecation
Feature group: node
The StreamingProxyRedirects feature was initially developed to alleviate request overhead driven by the implementation of the Container Runtime Interface. The rationale was that by bypassing the kubelet, performance wouldn't take a big hit.
However, with all requests centralized in the apiserver, those theoretical improvements weren't as important. Also, some security concerns required the implementation of the ValidateProxyRedirects feature.
After being initially deprecated in 1.20, the feature gates have been disabled in 1.22. In Kubernetes 1.24, these features will be removed and the only way to configure container streaming will be by using the local redirect with Kubelet proxy approach. Check the full details in the KEP.
#536 Topology API
Feature group: network
In the last release, the #2433 Topology Aware Hints enhancement was created to replace the ServiceTopology feature gate that was introduced in Kubernetes 1.17. As a result, the ServiceTopology feature is now deprecated in favor of Topology Aware Hints.
#281 DynamicKubeletConfig deprecation
Feature group: node
After being in beta since Kubernetes 1.11, the Kubernetes team has decided to deprecate DynamicKubeletConfig instead of continuing its development.
Kubernetes 1.22 API
#555 Server-side Apply
Stage: Graduating to Stable
Feature group: api-machinery
Introduced in: 1.14
This feature aims to move the logic behind kubectl apply to the apiserver, fixing most of the current workflow pitfalls and making the operation accessible directly from the API (for example, using curl), without strictly requiring kubectl or a Golang implementation.
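As a rough sketch of what calling it directly looks like: a server-side apply is an HTTP PATCH with Content-Type: application/apply-patch+yaml and a fieldManager query parameter, whose body is a partial manifest containing only the fields you want to own (the Deployment name below is hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app    # hypothetical object; only the fields listed here are claimed by this manager
spec:
  replicas: 3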
#1693 Warnings mechanism
Stage: Graduating to Stable
Feature group: api-machinery
Introduced in: 1.19
When you use a deprecated API, the API server includes a warning header with deprecation information: when the API was introduced, when it was deprecated, and when it will be removed.
#2161 Immutable label selectors for all namespaces
Stage: Graduating to Stable
Feature group: api-machinery
Introduced in: 1.21
A new immutable label, kubernetes.io/metadata.name, has been added to all namespaces; its value is the namespace name. This label can be used with any namespace selector, such as the namespaceSelector in a NetworkPolicy object.
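For instance, a minimal namespaceSelector that targets a namespace purely by its name could look like this (kube-system is just an example):
namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: kube-system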
#1040 Priority and Fairness for API Server Requests
Stage: Graduating to Beta
Feature group: api-machinery
Introduced in: 1.18
The APIPriorityAndFairness feature gate enables a new max-in-flight request handler in the API server. By defining different types of requests with FlowSchema objects and assigning them resources with PriorityLevelConfiguration objects, you can ensure the Kubernetes API server stays responsive for admin and maintenance tasks during high load.
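As a sketch of how the pieces fit together, loosely based on the upstream example: a FlowSchema matches a class of requests and points them at a priority level by name (here the built-in exempt level):
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: health-for-strangers
spec:
  matchingPrecedence: 1000
  priorityLevelConfiguration:
    name: exempt                  # built-in priority level that is never queued
  rules:
  - nonResourceRules:
    - nonResourceURLs: ["/healthz", "/livez", "/readyz"]
      verbs: ["*"]
    subjects:
    - kind: Group
      group:
        name: system:unauthenticated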
Apps in Kubernetes 1.22
#2307 Job tracking without lingering Pods
Stage: Alpha
Feature group: apps
With this enhancement, Jobs will be able to remove completed pods earlier, freeing resources in the cluster.
The current Job controller tracks completed Jobs by keeping Pods in a lingering state after they finish. Only when the Job is finished are the Pods removed, and the resources freed.
With the new approach, the Job controller will keep track of the completed jobs while allowing the completed Pods to be removed.
Check the KEP to learn the implementation details.
#2599 Add minReadySeconds to Statefulsets
Stage: Alpha
Feature group: apps
This enhancement brings to StatefulSets the optional minReadySeconds field that is already available on Deployments, DaemonSets, ReplicasSets, and Replication Controllers.
If declared, a newly created Pod won't be considered available until its containers have stayed ready, without crashing, for the specified number of seconds.
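A minimal sketch of the field in place (names and image are illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  minReadySeconds: 10        # Pods must stay Ready for 10 seconds before counting as available
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21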
#19 CronJobs
Stage: Graduating to Stable
Feature group: apps
Introduced in: 1.4
Introduced in Kubernetes 1.4 and in beta since 1.8, CronJobs are finally on the road to becoming Stable.
CronJobs run periodic tasks in a Kubernetes cluster, similar to cron on UNIX-like systems.
#85 PodDisruptionBudget eviction
Stage: Graduating to Stable
Feature group: apps
Introduced in: 1.4
Introduced in Kubernetes 1.4 and in beta since 1.5, PodDisruptionBudget finally graduates to Stable.
With this API, you can define a minAvailable number of replicas for a set of Pods. If you try to voluntarily disrupt a Pod in a way that violates the minAvailable value, it won't be deleted.
While in Kubernetes 1.21 we saw the graduation of PodDisruptionBudget to GA, in Kubernetes 1.22 we see the graduation of the Eviction subresource:
{ "apiVersion": "policy/v1", "kind": "Eviction", "metadata": { "name": "quux", "namespace": "default" } }
#1591 Graduate DaemonSet maxSurge to beta
Stage: Graduating to Beta
Feature group: apps
When performing a rolling update, this enhancement allows specifying how many new Pods will be created to replace the old ones.
This is done via the optional spec.updateStrategy.rollingUpdate.maxSurge field. By assigning an absolute number, you can set the maximum number of Pods that can be created over the desired number of Pods. The value can also be a percentage of the desired Pods. The default is 0, which matches the previous behavior of not creating extra Pods during the rollout.
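On a DaemonSet, the relevant fragment could look like this (values are illustrative):
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # or an absolute number of extra Pods
      maxUnavailable: 0      # keep every old Pod until its replacement is ready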
#2185 Graduate LogarithmicScaleDown to Beta
Stage: Graduating to Beta
Feature group: apps
Introduced in: 1.21
When the LogarithmicScaleDown feature gate is enabled, a semi-random selection of Pods will be used to downscale ReplicaSets based on logarithmic bucketing of pod timestamps.
#2255 Graduate PodDeletionCost to Beta
Stage: Graduating to Beta
Feature group: apps
Introduced in: 1.21
You can now annotate Pods with controller.kubernetes.io/pod-deletion-cost=10, with 0 being the default value. When scaling down, a best effort will be made to remove Pods with a lower deletion cost.
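For example, to bias the controller away from deleting a particular Pod when its ReplicaSet scales down, annotate it like this (the value is illustrative):
metadata:
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "100"   # Pods with higher cost are removed later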
#2214 Indexed Job semantics in Job API
Stage: Graduating to Beta
Feature group: apps
Introduced in: 1.21
Indexed jobs were introduced in Kubernetes 1.21 to make it easier to schedule highly parallelizable Jobs.
This enhancement adds completion indexes into the Pods of a Job with fixed completion count, to support running embarrassingly parallel programs.
This way, it is possible to pass the job index to the Pods via environment variables, or even create indexed jobs where pods can address each other by a hostname built from the index:
[...]
spec:
  subdomain: my-job-svc
  containers:
  - name: task
    image: registry.example.com/processing-image
    command: ["./process", "--index", "$JOB_COMPLETION_INDEX", "--hosts-pattern", "my-job-{{.id}}.my-job-svc"]
#2232 add suspend field to Jobs API
Stage: Graduating to Beta
Feature group: apps
Introduced in: 1.21
Since Kubernetes 1.21, Jobs can be temporarily suspended by setting the .spec.suspend field of the job to true, and resumed later by setting it back to false.
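A minimal sketch of a Job created in the suspended state (name and image are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: deferred-task
spec:
  suspend: true              # no Pods are created until this is set back to false
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: registry.example.com/processing-image
        command: ["./process"]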
Kubernetes 1.22 Authentication
#2579 Pod Security Policy replacement
Stage: Alpha
Feature group: auth
Introduced in: 1.21
A new PodSecurity admission controller is available to replace the Pod Security Policies deprecated in Kubernetes 1.21.
This new admission controller will be able to enforce the Pod Security Standards per namespace. The enforcement can be applied in three modes:
- enforce: Policy violations will cause the pod to be rejected.
- audit: Policy violations will trigger the addition of an audit annotation, but are otherwise allowed.
- warn: Policy violations will trigger a user-facing warning, but are otherwise allowed.
It'll be possible to configure this via the AdmissionConfiguration file:
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    defaults:  # Defaults applied when a mode label is not set.
      enforce: <default enforce policy level>
      enforce-version: <default enforce policy version>
      audit: <default audit policy level>
      audit-version: <default audit policy version>
      warn: <default warn policy level>
      warn-version: <default warn policy version>
    exemptions:
      usernames: [ <array of authenticated usernames to exempt> ]
      runtimeClassNames: [ <array of runtime class names to exempt> ]
      namespaces: [ <array of namespaces to exempt> ]
To learn more about the rationale behind this enhancement, and possible new developments, check the KEP.
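In practice, the modes are applied per namespace through labels. A hedged sketch, assuming the pod-security.kubernetes.io label keys and the baseline/restricted Pod Security Standards levels (the namespace name is illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.22
    pod-security.kubernetes.io/warn: restricted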
#541 External client-go credential providers
Stage: Graduating to Stable
Feature group: auth
Introduced in: 1.10
This enhancement allows Go clients to authenticate using external credential providers, like Key Management Systems (KMS), Trusted Platform Modules (TPM), or Hardware Security Modules (HSM).
Those devices are already used to authenticate against other services, are easier to rotate, and are more secure, as they don't exist as files on the disk.
#542 Bound Service Account Token Volumes
Stage: Graduating to Stable
Feature group: auth
Introduced in: 1.20
The current JSON Web Tokens (JWT) that workloads use to authenticate against the API have some security issues. This enhancement comprises the work to create a more secure API for JWT.
#2784 CSR Duration
Stage: Graduating to Beta
Feature group: auth
Up until now, clients of the Certificates API weren't able to request a specific duration for an issued certificate.
This duration defaulted to one year for every new certificate, which may be too long for some use cases.
A new ExpirationSeconds field has been added to CertificateSigningRequestSpec, which accepts a minimum value of 600 seconds (10 minutes).
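A hedged sketch of a request for a short-lived client certificate (the name is illustrative and the CSR content is a placeholder):
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: short-lived-client-cert
spec:
  request: <base64-encoded PEM CSR>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 3600          # request a one-hour certificate; the minimum is 600
  usages:
  - client auth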
Network in Kubernetes 1.22
#1669 kube-proxy handling terminating endpoints
Stage: Alpha
Feature group: network
This enhancement addresses an edge case where Services with externalTrafficPolicy=Local drop all traffic from a load balancer if the number of local endpoints on a node drops to zero. This behavior makes zero-downtime rolling updates impossible for such Services.
The implemented solution is to send all external traffic on the affected nodes to both ready and not ready terminating endpoints (preferring the ready ones).
To learn more about this edge case and the implementation details, check the KEP.
#2595 Expanded DNS configuration
Stage: Alpha
Feature group: network
With this enhancement, Kubernetes allows more DNS search paths, and a longer list of DNS search paths, to keep up with recent DNS resolvers.
After enabling the ExpandedDNSConfig feature gate, two DNS configuration limits change:
- MaxDNSSearchPaths jumps from 6 to 32.
- MaxDNSSearchListChars increases from 256 to 2048.
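With the gate enabled, a Pod can carry a search list longer than the old limit of six entries; a sketch with illustrative domains:
apiVersion: v1
kind: Pod
metadata:
  name: dns-heavy-pod
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 10.0.0.10
    searches:                      # more entries than the previous limit of 6
    - ns1.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    - corp.example.com
    - emea.corp.example.com
    - amer.corp.example.com
    - apac.corp.example.com
  containers:
  - name: app
    image: busybox:1.34
    command: ["sleep", "3600"]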
#752 EndpointSlice API
Stage: Graduating to Stable
Feature group: network
Introduced in: 1.20
The new EndpointSlice API will split endpoints into several Endpoint Slice resources. This solves many problems in the current API that are related to big Endpoints objects. This new API is also designed to support other future features, like multiple IPs per pod.
Noteworthy in Kubernetes 1.22: if an Endpoints resource has more than 1,000 endpoints, the cluster truncates it and annotates it with endpoints.kubernetes.io/over-capacity: truncated, where before it was only annotated with endpoints.kubernetes.io/over-capacity: warning.
#1507 AppProtocol field in Services and Endpoints
Stage: Graduating to Stable
Feature group: network
Introduced in: 1.18
After being stable since 1.20, the ServiceAppProtocol feature gate has now been removed.
#1864 Service Disabling LB Node Ports
Stage: Graduating to Beta
Feature group: network
Introduced in: 1.20
Some implementations of the LoadBalancer API do not consume the node ports automatically allocated by Kubernetes, like MetalLB or kube-router. However, the API requires a port to be defined and allocated, even if it's not being used.
When the spec.allocateLoadBalancerNodePorts field of a Service is set to false, node ports will no longer be allocated for it.
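A minimal sketch of such a Service (name, selector, and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: bare-metal-lb
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false   # skip node port allocation for this Service
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080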
#1959 Service LoadBalancer Class
Stage: Graduating to Beta
Feature group: network
Introduced in: 1.21
This enhancement allows you to leverage multiple Service Type=LoadBalancer implementations in a cluster.
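A Service selects its implementation through the loadBalancerClass field; a sketch with an illustrative class name:
apiVersion: v1
kind: Service
metadata:
  name: internal-lb
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-lb   # only the matching implementation reconciles this Service
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443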
#2079 Network Policy EndPort
Stage: Graduating to Beta
Feature group: network
Introduced in: 1.21
This enhancement will allow you to define all ports in a NetworkPolicy as a range:
spec:
  egress:
  - ports:
    - protocol: TCP
      port: 32000
      endPort: 32768
#2086 Service InternalTrafficPolicy
Stage: Graduating to Beta
Feature group: network
Introduced in: 1.21
You can now set the spec.internalTrafficPolicy field on Service objects to optimize your cluster traffic:
- With Cluster, the routing will behave as usual.
- When set to Topology, it will use the topology-aware routing.
- With PreferLocal, it will redirect traffic to services on the same node.
- With Local, it will only send traffic to services on the same node.
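A sketch of a Service that only routes in-cluster traffic to endpoints on the same node (name, selector, and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: node-local-svc
spec:
  internalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080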
#2365 Namespace Scoped Ingress Class Parameters
Stage: Graduating to Beta
Feature group: network
Introduced in: 1.21
With this enhancement, you can now specify parameters for an IngressClass with a Namespace scope.
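A hedged sketch of an IngressClass whose parameters live in a namespace (the parameter kind, group, and names are hypothetical):
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: team-a-params
    scope: Namespace               # parameters are looked up in a namespace instead of cluster scope
    namespace: team-a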
Kubernetes 1.22 nodes
#2033 Rootless mode containers
Stage: Alpha
Feature group: node
Not running containers as root is the No. 1 container security best practice.
With this enhancement, Kubernetes goes a step further, allowing you to run the whole Kubernetes stack as a non-root user. This way, if your cluster is compromised, attackers will have a bad time accessing the rest of your infrastructure.
You can run Kubernetes in rootless mode by enabling the KubeletInUserNamespace feature gate and following these instructions.
However, keep in mind that there are plenty of caveats you'll have to address.
#2254 Cgroupsv2
Stage: Alpha
Feature group: node
Cgroups is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. Its v2 API was declared stable more than two years ago, and many Linux distributions already use it by default.
This enhancement covers the work done to make Kubernetes compatible with Cgroups v2, starting with the configuration files.
Check the new configuration values, as some value ranges change. For example, cpu.weight values change from [2-262144] to [1-10000].
This enhancement doesn't cover enabling the new features in v2 that aren't available in v1. It also doesn't cover the deprecation of v1.
#2400 Node swap support
Stage: Alpha
Feature group: node
Swap (disk space used as overflow memory in Linux) is not the fastest memory around. However, several kinds of workloads, like Java and Node applications, benefit from it.
This enhancement enables Kubernetes workloads to use swap. For now, the configuration is global for the whole node, and cannot be configured per workload. To use it:
- Provision swap on the target worker nodes.
- Enable the NodeSwap feature gate on the kubelet.
- Set the kubelet --fail-swap-on flag to false.
- Optionally, allow Kubernetes workloads to use swap by setting MemorySwap.SwapBehavior=UnlimitedSwap in the kubelet config.
Check further implementation details in the KEP.
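Putting those steps together in a kubelet configuration, a hedged sketch assuming the NodeSwap gate and memorySwap stanza described in the KEP:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true
failSwapOn: false                  # don't refuse to start on a node with swap enabled
memorySwap:
  swapBehavior: UnlimitedSwap      # optionally let workloads use as much swap as is available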
#2413 Seccomp by default
Stage: Alpha
Feature group: node
Kubernetes currently allows you to increase the security of your containers by executing them with a Seccomp profile.
This enhancement will enable this option by default, helping prevent CVEs and zero-day vulnerabilities. You can enable this behavior with the SeccompDefault feature gate, which turns the existing RuntimeDefault profile into the default for any container.
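Today you opt in per workload; with SeccompDefault enabled, containers behave as if this securityContext had been set:
securityContext:
  seccompProfile:
    type: RuntimeDefault           # use the container runtime's default seccomp profile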
#2570 Memory QoS with cgroupsv2
Stage: Alpha
Feature group: node
Now that Cgroups v2 is supported, work has started on supporting its new features.
It’s now possible to configure Memory QoS with the following options:
- memory.min: Minimum amount of memory the cgroup must always retain.
- memory.max: The memory usage hard limit, acting as the final protection mechanism.
- memory.low: Best-effort memory protection, a "soft guarantee" that the cgroup's memory won't be reclaimed if the cgroup and all its descendants are below this threshold, unless memory can't be reclaimed from any unprotected cgroups. This field is not used by Kubernetes for now.
- memory.high: If a cgroup's memory use goes over the high boundary specified here, the cgroup’s processes are throttled and put under heavy reclaim pressure.
#2625 New CPU Manager Policies
Stage: Alpha
Feature group: node
This enhancement allows the CPU Manager to assign exclusive workloads to specific CPU cores. Isolating workloads per CPU core avoids context and cache switching, which is crucial for latency-sensitive applications.
A new --cpu-manager-policy-options option is available on the CPU Manager. When set to full-pcpus-only=true, the node will isolate workloads per physical CPU core. Note that it will only accept workloads that use entire CPUs.
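A hedged sketch of the equivalent kubelet configuration, assuming the cpuManagerPolicyOptions field introduced alongside this enhancement:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  full-pcpus-only: "true"          # only admit Guaranteed workloads that consume whole physical cores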
#1539 HugePageStorageMediumSize
Stage: Graduating to Stable
Feature group: node
Introduced in: 1.18
These two enhancements added to the HugePages feature in Kubernetes 1.18 are now stable.
First, the pods are now allowed to request HugePages of different sizes.
And second, container isolation of HugePages has been put in place to solve an issue where a pod could use more memory than requested, ending up in resource starvation.
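A sketch of a Pod requesting two HugePages sizes at once (name and image are illustrative; hugepages requests must equal their limits):
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  containers:
  - name: app
    image: registry.example.com/hugepages-app
    resources:
      requests:
        hugepages-2Mi: 100Mi
        hugepages-1Gi: 2Gi
        memory: 128Mi
      limits:
        hugepages-2Mi: 100Mi
        hugepages-1Gi: 2Gi
        memory: 128Mi
    volumeMounts:
    - name: hugepage-2mi
      mountPath: /hugepages-2Mi
  volumes:
  - name: hugepage-2mi
    emptyDir:
      medium: HugePages-2Mi        # size-specific medium, enabled by this enhancement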
#1797 Set pod FQDN
Stage: Graduating to Stable
Feature group: node
Introduced in: 1.19
Now, it’s possible to set a pod’s hostname to its Fully Qualified Domain Name (FQDN), increasing the interoperability of Kubernetes with legacy applications.
After setting setHostnameAsFQDN: true in the Pod spec, running uname -n inside a Pod returns foo.test.bar.svc.cluster.local instead of just foo.
You can read more details in the enhancement proposal.
#277 Ephemeral Containers
Stage: Graduating to Beta
Feature group: node
Introduced in: 1.16
Ephemeral containers are a great way to debug running pods. Although you can't add regular containers to a pod after creation, you can run ephemeral containers with kubectl debug.
#1769 Memory Manager
Stage: Graduating to Beta
Feature group: node
Introduced in: 1.21
A new Memory Manager component is available in kubelet to guarantee memory and hugepages allocation for pods. It will only act on Pods in the guaranteed QoS class.
#1967 Support to size memory backed volumes
Stage: Graduating to Beta
Feature group: node
Introduced in: 1.20
When a Pod defines a memory-backed empty dir volume (e.g., tmpfs), not all hosts size this volume equally. For example, a Linux host sizes it to 50 percent of the memory on the host. This new enhancement will size volumes not only with the node allocatable memory in mind, but also with the pod allocatable memory and the emptyDir.sizeLimit field.
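A sketch of a memory-backed emptyDir with an explicit size (name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
  - name: app
    image: busybox:1.34
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # backed by tmpfs
      sizeLimit: 512Mi             # with this feature, also used to size the tmpfs mount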
#2238 Configurable grace period for probes
Stage: Graduating to Beta
Feature group: node
Introduced in: 1.21
This enhancement introduces a second terminationGracePeriodSeconds field, inside the livenessProbe object, to differentiate two situations: how long Kubernetes should wait to kill a container under regular circumstances, and how long it should wait when the kill is due to a failed livenessProbe.
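A sketch of a Pod spec fragment using both fields (image and paths are illustrative):
spec:
  terminationGracePeriodSeconds: 120       # regular shutdown budget
  containers:
  - name: app
    image: registry.example.com/slow-shutdown-app
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 3
      periodSeconds: 10
      terminationGracePeriodSeconds: 10    # much shorter budget when killed for failing liveness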
Scheduling in Kubernetes 1.22
#785 Scheduler Component Config API
Stage: Graduating to Beta
Feature group: scheduling
Introduced in: 1.19
ComponentConfig is an ongoing effort to make component configuration more dynamic and directly reachable through the Kubernetes API.
#1923 Honor Nominated node during the new scheduling cycle
Stage: Graduating to Beta
Feature group: scheduling
Introduced in: 1.21
This feature lets you define a preferred node with the .status.nominatedNodeName field inside a Pod to speed up the scheduling process in large clusters.
#2249 Namespace Selector to Beta
Stage: Graduating to Beta
Feature group: scheduling
Introduced in: 1.21
By defining pod affinity in a Deployment, you can constrain which nodes your Pods are scheduled on, for example, nodes that are already running Pods carrying a given label.
This enhancement adds a namespaceSelector field so you can specify the namespaces by their labels, rather than their names. With this field, you can dynamically define the set of namespaces.
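A sketch of a pod affinity term using the new field (labels are illustrative):
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache             # co-locate with Pods carrying this label...
        namespaceSelector:
          matchLabels:
            team: payments         # ...in any namespace labelled team=payments
        topologyKey: kubernetes.io/hostname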
#2458 A single scoring plugin for node resources
Stage: Graduating to Beta
Feature group: scheduling
Three score plugins of the default scheduler (NodeResourcesLeastAllocated, NodeResourcesMostAllocated and RequestedToCapacityRatio) implement mutually exclusive strategies for preferred resource allocation.
This enhancement simplifies the scheduler by deprecating those plugins and combining them under a single NodeResourcesFit plugin, using a ScoringStrategy property that can be set to LeastAllocated (default), MostAllocated, and RequestedToCapacityRatio.
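A hedged sketch of what that looks like in the scheduler configuration, assuming the v1beta2 component config API shipped with 1.22:
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: MostAllocated        # bin-packing behavior; LeastAllocated is the default
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1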
Kubernetes 1.22 storage
#2317 Delegate FSGroup to CSI Driver instead of Kubelet
Stage: Alpha
Feature group: storage
Before a CSI volume is bind mounted inside a container, Kubernetes modifies the volume ownership via fsGroup. For most volume plugins, kubelet does so by recursively chowning and chmoding the files and directories inside a volume. However, chown and chmod are unix primitives, so they are not available to some CSI drivers, like AzureFile.
This enhancement proposes providing the CSI driver with the fsGroup of the pod as an explicit field, so the CSI driver can apply it natively at mount time.
This behavior can be enabled with the DelegateFSGroupToCSIDriver feature gate, for drivers supporting the VOLUME_MOUNT_GROUP NodeServiceCapability. The fsGroup should be specified in the securityContext.
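A sketch of a Pod handing its fsGroup to a supporting CSI driver (the image and claim name are hypothetical):
spec:
  securityContext:
    fsGroup: 1000                  # passed to the CSI driver at mount time when delegation applies
  containers:
  - name: app
    image: registry.example.com/app
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: csi-backed-claim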
#2485 New RWO access mode
Stage: Alpha
Feature group: storage
With this enhancement, it’s possible to access PersistentVolumes in a ReadWriteOncePod mode, restricting access to a single pod on a single node.
The existing ReadWriteOnce access mode restricts access to a single node, but allows simultaneous access from many pods on that node.
The new mode is ideal for maintenance tasks on PersistentVolumes.
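A minimal sketch of a claim using the new mode (name and size are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
  - ReadWriteOncePod               # only one Pod, on one node, may use the volume
  resources:
    requests:
      storage: 1Gi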
#2047 CSIServiceAccountToken
Stage: Graduating to Stable
Feature group: storage
Introduced in: 1.20
With this enhancement, CSI drivers can request the pod's service account tokens from the kubelet as part of the NodePublishVolume call. The kubelet can also limit which tokens are available to which driver. And finally, a driver can set RequiresRepublish to true to have NodePublishVolume re-executed and the volume remounted.
This last feature comes in handy when mounted credentials can expire and need a re-login: for example, a secrets vault.
#1495 Volume Populator DataSource
Stage: Graduating to Beta
Feature group: storage
Introduced in: 1.18
This enhancement establishes the foundations that will allow users to create pre-populated volumes. For example, pre-populating a disk for a virtual machine with an OS image, or enabling data backup and restore.
To accomplish this, the current validations on the DataSource field of persistent volumes will be lifted, allowing it to set arbitrary objects as values. Implementation details on how to populate volumes are delegated to purpose-built controllers.
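A heavily hedged sketch of what a pre-populated claim could look like once validation is relaxed; the VMImage kind and example.com group are hypothetical and would require a matching populator controller:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-vm-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: example.com          # hypothetical populator API group
    kind: VMImage
    name: debian-11-image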
Windows support in Kubernetes 1.22
#1981 Windows Privileged Containers
Stage: Alpha
Feature group: windows
This enhancement brings the privileged containers feature available in Linux to Windows hosts.
Privileged containers have access to the host as if they were running directly on it. Although they are not recommended for most workloads, they are quite useful for administration, security, and monitoring purposes.
If your cluster has the WindowsHostProcessContainers feature enabled, you can create a Windows HostProcess pod by setting the windowsOptions.hostProcess flag on the security context of the pod spec. All containers in these pods must run as Windows HostProcess containers.
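A hedged sketch of such a pod (image and command are illustrative; HostProcess pods must also use the host network):
apiVersion: v1
kind: Pod
metadata:
  name: windows-admin-task
spec:
  hostNetwork: true
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  containers:
  - name: admin
    image: registry.example.com/windows-admin-image
    command: ["powershell.exe", "-Command", "Get-Service"]
  nodeSelector:
    kubernetes.io/os: windows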
#1122 Support CSI Plugins in Windows
Stage: Graduating to Stable
Feature group: windows
Introduced in: 1.16
Container Storage Interface plugins were created to allow the development of third-party storage volume systems.
Since Kubernetes 1.16, Windows nodes are able to use the existing CSI plugins. Now this feature is Stable.
Other enhancements in Kubernetes 1.22
#647 API Server Tracing
Stage: Alpha
Feature group: instrumentation
This enhancement improves the API Server to allow tracing requests using OpenTelemetry libraries and exporting traces in the OpenTelemetry format.
You can enable the tracing via the APIServerTracing feature gate and by launching the apiserver with --tracing-config-file=<path-to-config>, where the config file would look like:
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
# default value
#endpoint: localhost:4317
samplingRatePerMillion: 100
#2568 Run control-plane as non-root in kubeadm
Stage: Alpha
Feature group: cluster-lifecycle
Related to #2033 Rootless mode containers, this enhancement summarizes the work to be able to run the control plane in kubeadm as non-root.
#970 kubeadm: graduate the kubeadm configuration
Stage: Graduating to Beta
Feature group: cluster-lifecycle
Introduced in: 1.15
Over time, the number of options to configure the creation of a Kubernetes cluster has greatly increased in the kubeadm config file, while the number of CLI parameters has been kept the same. As a result, the config file is the only way to create a cluster with several specific use cases.
The goal of this feature is to redesign how the configuration is persisted, improving the current version and providing better support for high-availability clusters by using substructures instead of a single flat file with all the options.
#2436 Leader Migration for Cloud Controller Managers
Stage: Graduating to Beta
Feature group: cloud-provider
Introduced in: 1.21
As we've been discussing for a while, there is an active effort to move code specific to cloud providers outside of the Kubernetes core code (from in-tree to out-of-tree).
This enhancement establishes a migration process for clusters with strict requirements on control plane availability.
#859 Include kubectl command metadata in http request headers
Stage: Graduating to Beta
Feature group: cli
Introduced in: 1.21
From now on, kubectl will include additional HTTP headers in the requests to the API server. By knowing what kubectl command triggered a given request, administrators will have useful information to aid in troubleshooting and enforcing best practices.
That’s all for Kubernetes 1.22, folks! Exciting as always; get ready to upgrade your clusters if you are intending to use any of these features.