The New Kubernetes Gateway API and Its Use Cases
Originally published by ARMO here.
Written by Leonid Sandler, CTO & Co-founder, ARMO.
Despite being a large and complex open-source project, Kubernetes keeps evolving at an impressive pace. Sitting at the center of various platforms and solutions, the Kubernetes project's biggest challenge is to remain vendor-neutral. This is why the community has come up with the Kubernetes Gateway API.
Gateway API is a new project by the SIG-Network community designed to enhance service networking through consistent and extensible interfaces that multiple vendors can implement to offer broader choices to the development community.
In this article, we’ll look into Gateway API, how it is different from Ingress, and what we can do with it.
How Gateway API Differs from Ingress
Usually, after deploying a Kubernetes application, you need to expose it to the end users. This is typically done using Ingress Controller constructs for north-south traffic. The Ingress API object defines the routes and mapping of the external traffic to Kubernetes services; it also offers load balancing, SSL termination, and name-based virtual hosting.
Many off-the-shelf controllers, such as the NGINX Ingress controller and HAProxy, implement the Ingress interface. These controllers have differentiated themselves by providing additional features, such as advanced load balancing, via proprietary extensions. The scope of the current Ingress API is deliberately limited to ensure better portability across the ecosystem.
Gateway API is an evolution of Ingress that provides advanced features natively by extending the API definitions. Some of these features were provided by individual Ingress vendors as private extensions, but these implementations were not aligned with each other. Now, with Gateway API, many such capabilities will be implemented by multiple vendors according to a single specification so that users will have a choice of different implementations.
Important additions in Gateway API include HTTP and TCP routes, traffic splitting, and a role-oriented approach that allows cluster admins and developers to focus on the settings relevant to their responsibilities.
Kubernetes Gateway API Concepts
The SIG-Network community has come up with the following design goals for Gateway API to improve upon the Ingress resource:
- Role-oriented: API resources for managing Kubernetes service networking should model the different organizational roles that own them, so each role manages only the resources within its scope.
- Portable: Like Ingress, these APIs should also be portable and have a universal specification.
- Expressive: API natively supports core functionalities such as header-based routing, traffic weighting, and other advanced features that were only possible in Ingress through custom annotations.
- Extensible: Every layer of the API is extensible via custom resources, enabling granular customization of the API’s structure.
Other notable capabilities of Kubernetes Gateway API are:
- GatewayClasses: These formalize types of load-balancing implementations, making the capabilities of the underlying Kubernetes resource model clear to users.
- Shared gateways and cross-namespace support: These allow you to create separation among teams per their responsibilities. We will discuss this further in the next section.
- Typed routes and typed backends: These offer support for HTTPRoute, TCPRoute, TLSRoute, UDPRoute, etc., to provide coverage for all protocols.
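As an illustration of the first point, a GatewayClass is defined once per controller implementation and then referenced by Gateways. A minimal sketch, in which the class name and controllerName value are hypothetical (the real value is set by whichever implementation you install):

```yaml
# A GatewayClass advertising a particular controller implementation.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: example-class                             # hypothetical name
spec:
  controllerName: example.com/gateway-controller  # hypothetical controller
```

Gateways then select this implementation via spec.gatewayClassName.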
Now, the question is, what can you do with Gateway API?
Advanced Traffic Routing and Progressive Delivery
One of the most important features missing in Ingress is advanced traffic routing. Up until now, this was resolved by way of a service mesh, which made it complex and tightly coupled with the mesh implementation.
Below, we cover some of the advanced routing scenarios that Gateway API provides natively so you no longer need a mesh.
Simple Gateway
This is the simplest way to expose a service: a Gateway and a Route managed by the same owner. The Gateway controller manages the load balancer and forwards all arriving traffic to the service. This gives service owners the autonomy to expose their services as they see fit, in a model similar to Ingress.
Figure 2: Simple Gateway pattern where traffic flows via a load balancer. (Source: Kubernetes Gateway API)
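A minimal Gateway plus HTTPRoute pair in this pattern might look like the following sketch; all names, ports, and the gateway class are assumptions, and the apiVersion depends on the Gateway API release installed in your cluster:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway                  # hypothetical name
spec:
  gatewayClassName: example-class   # provided by your implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-route                    # hypothetical name
spec:
  parentRefs:
  - name: my-gateway                # attach the route to the gateway above
  rules:
  - backendRefs:
    - name: my-svc                  # the Service being exposed
      port: 8080
```

With no match rules specified, all traffic arriving at the listener is forwarded to the backend service.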
HTTP Routing
HTTPRoute allows you to route traffic to multiple services based on match rules. In the example below, two routes are attached to the same gateway:
- HTTPRoute foo-route, where all traffic that matches the hostname foo.example.com and the path /login is sent to foo-svc.
- HTTPRoute bar-route, where traffic for the host bar.example.com is sent to bar-svc, while traffic carrying the header env: canary is sent to the bar-svc-canary service.
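The two routes just described could be sketched roughly as follows; the service ports and the parent gateway name are assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: foo-route
spec:
  parentRefs:
  - name: example-gateway      # hypothetical shared gateway
  hostnames:
  - "foo.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /login
    backendRefs:
    - name: foo-svc
      port: 8080
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: bar-route
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "bar.example.com"
  rules:
  - matches:                   # canary traffic selected by header
    - headers:
      - name: env
        value: canary
    backendRefs:
    - name: bar-svc-canary
      port: 8080
  - backendRefs:               # everything else
    - name: bar-svc
      port: 8080
```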
This model allows you to introduce a progressive delivery strategy.
Figure 3: HTTP routing with multiple paths and header-based routing. (Source: Kubernetes Gateway API)
HTTP Traffic Splitting
In this model, you can have weighted traffic routing, which you can combine with A/B or canary strategies to achieve complex rollouts in a simple way. In the above diagram, the HTTPRoute is splitting the traffic 90/10 and sending it to the foo-v1 service and foo-v2 service, respectively.
Note that the weights are not percentages. Instead, each backend receives the share of traffic equal to its weight divided by the sum of all weights in the rule. This allows for a gradual rollout of the newer version.
Figure 4: HTTP traffic splitting example. (Source: Kubernetes Gateway API)
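A 90/10 split of the kind described above could be sketched as follows; the service names come from the figure, while the ports and gateway name are assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: foo-split
spec:
  parentRefs:
  - name: example-gateway      # hypothetical gateway
  rules:
  - backendRefs:
    - name: foo-v1
      port: 8080
      weight: 90               # 90 / (90 + 10) of the traffic
    - name: foo-v2
      port: 8080
      weight: 10               # 10 / (90 + 10) of the traffic
```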
You can also achieve header-based routing by keeping the weights the same and adding a match on a header. A canary route with the header traffic: test is sent to the new foo-v2 service for testing, allowing for safer releases without impacting v1 traffic.
Figure 5: Header-based routing. (Source: Kubernetes Gateway API)
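A rough sketch of such a canary rule, assuming the header and service names from the figure and hypothetical ports:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: foo-canary
spec:
  parentRefs:
  - name: example-gateway      # hypothetical gateway
  rules:
  - matches:                   # test traffic goes to the new version
    - headers:
      - name: traffic
        value: test
    backendRefs:
    - name: foo-v2
      port: 8080
  - backendRefs:               # all other traffic stays on v1
    - name: foo-v1
      port: 8080
```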
Cross-Namespace Routing
With cross-namespace routing, you can share gateways while giving teams control over their own routes. For example, in a large setup, an infrastructure team may manage a shared gateway while individual teams manage their own routes. This allows for a clear separation of duties among users.
Figure 6: Cross-namespace routing using a shared gateway model. (Source: Kubernetes Gateway API)
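A shared gateway of this kind can restrict which namespaces may attach routes, and routes in other namespaces reference it explicitly. A sketch, with all names and the label selector being assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra             # owned by the infrastructure team
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector         # only labeled namespaces may attach routes
        selector:
          matchLabels:
            shared-gateway-access: "true"
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route
  namespace: store             # owned by an application team
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra           # cross-namespace reference to the gateway
  rules:
  - backendRefs:
    - name: store-svc
      port: 8080
```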
In the table below, we can see the different roles and responsibilities of the owner for each resource type. For example, an infrastructure provider may have access to all the layers, but an application developer is limited to owning only routes to their service:
| Role | Manage GatewayClasses | Manage Gateways | Manage Routes |
|---|---|---|---|
| Infrastructure providers | Yes | Yes | Yes |
| Application admins | No | In specific namespaces | In specific namespaces |
| Application developers | No | No | In specific namespaces |
TLS Configuration
Gateway API supports TLS configuration at various points in the network path between the client and the service, independently for upstream and downstream connections. Depending on the listener configuration, various TLS modes and route types are possible; integration with cert-manager is also available.
Figure 7: TLS configuration for both upstream and downstream. (Source: Kubernetes Gateway API)
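For example, terminating TLS at the gateway is configured on the listener. A sketch, assuming a Secret named example-cert of type kubernetes.io/tls already exists in the gateway's namespace:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: tls-gateway            # hypothetical name
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate          # decrypt at the gateway; Passthrough is the alternative
      certificateRefs:
      - name: example-cert     # hypothetical TLS Secret
```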
The following table from the official documentation provides clarification as to when an object should be used and the TLS support provided by each:
| Object | OSI Layer | Routing Discriminator | TLS Support | Purpose |
|---|---|---|---|---|
| HTTPRoute | Layer 7 | Anything in the HTTP protocol | Terminated only | HTTP and HTTPS routing |
| TLSRoute | Between layer 4 and layer 7 | SNI or other TLS properties | Passthrough or terminated | Routing of TLS protocols, including HTTPS, where inspection of the HTTP stream is not required |
| TCPRoute | Layer 4 | Destination port | Passthrough or terminated | Forwarding of a TCP stream from the listener to the backends |
| UDPRoute | Layer 4 | Destination port | None | Forwarding of a UDP stream from the listener to the backends |
TCP Routing
You can implement numerous protocols with Gateway API, including TCP via TCPRoute. To enable TCP routing and manage TCP traffic, the listeners under the gateway should specify protocol: TCP in their configuration.
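A sketch of a TCP listener and its route; the names and ports are hypothetical, and the v1alpha2 apiVersion for TCPRoute is an assumption that depends on the release installed:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: tcp-gateway            # hypothetical name
spec:
  gatewayClassName: example-class
  listeners:
  - name: tcp
    protocol: TCP              # enables TCP routing on this listener
    port: 8000
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: tcp-route
spec:
  parentRefs:
  - name: tcp-gateway
    sectionName: tcp           # attach to the TCP listener above
  rules:
  - backendRefs:
    - name: tcp-svc            # hypothetical backend Service
      port: 6000
```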
Integration with Progressive Delivery Tools
Combined with its advanced traffic routing options, Gateway API currently integrates with Flagger, a progressive delivery tool for advanced deployment strategies such as A/B testing, blue-green, and canary. Flagger works with all implementations of Gateway API.
The main idea of introducing Gateway API was to bring consistency across the Ingress capabilities offered by various solutions. This also makes it portable, so when you move the workload to a different provider or write a multi-cloud solution, it will work the same way without requiring a significant amount of change in the specifications.
Kubernetes Gateway API Demo
Various vendors are working on Gateway API implementations, including Contour, Emissary-Ingress, GKE, and Traefik. Just keep in mind that most of them are still in beta and not yet production-ready.
Kubernetes Gateway API has evolved to provide expressive, portable, and extensible API specs for implementers such as infrastructure providers, cluster operators, and application developers. Although not a complete replacement for Ingress as of now, you should aim to use Gateway API wherever possible, as it does provide more options for developers without introducing a lot of annotations or non-portable changes.
Furthermore, many projects in the service mesh and Ingress controller space have implemented Gateway API, along with tools like Flagger and cert-manager, and you should expect more to come on board as the API gains popularity in the CNCF ecosystem. Most of the implementations discussed above are in beta but are expected to reach GA soon.