
Getting Started with Kubernetes Ingress

Published 05/23/2022


This blog was originally published by ARMO here.

Written by Ben Hirschberg, VP R&D & Co-founder, ARMO.

Kubernetes Ingress is one of today’s most important Kubernetes resources. First introduced in 2015, it achieved GA status in 2020. Its goal is to simplify and secure the routing mechanism of incoming traffic to your defined services.

Ingress allows you to expose HTTP and HTTPS routes from outside the cluster to your services within the cluster by leveraging traffic routing rules you define when creating the Ingress. Without Ingress, you would need to expose each service separately, most likely with a separate load balancer. As you can imagine, this becomes cumbersome and expensive as your application scales and the number of services grows.
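For contrast, without Ingress, each externally reachable service would typically need its own LoadBalancer-type Service, along the lines of this hypothetical manifest (the service name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: armoservice1          # hypothetical service name
spec:
  type: LoadBalancer          # provisions a separate external load balancer per service
  selector:
    app: armoservice1         # pods this service routes to
  ports:
  - port: 80                  # port exposed by the load balancer
    targetPort: 8080          # port the application container listens on
```

Multiplying this across many services is exactly the cost and management burden Ingress consolidates.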

At the end of the day, you still need a load balancer or node port since you need to expose the Ingress outside of the cluster. However, you immediately benefit from reducing the number of required load balancers to, in many cases, just a single load balancer.

Moreover, when securing traffic with SSL/TLS, the configuration can be made at different levels; for example, at the application level or the load-balancer level. This can lead to disparity in how SSL is implemented, a disparity that becomes more prominent as your application scales, resulting in increased overhead. With Ingress, SSL can be configured in the Ingress resource itself, reducing the management overhead. These configurations are defined within the cluster and live alongside the other configuration files for your Kubernetes application.
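As a minimal sketch (the hostname, secret name, and backend service are hypothetical), TLS is enabled by adding a 'tls' section to the Ingress spec that references a Kubernetes Secret holding the certificate and key:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - armoexample.com            # host the certificate covers
    secretName: armoexample-tls  # Secret containing tls.crt and tls.key
  rules:
  - host: armoexample.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: armoservice1   # hypothetical backend service
            port:
              number: 80
```

With this in place, the controller terminates TLS at the Ingress, so individual services do not each need their own certificate handling.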

Want to learn how to get started with Kubernetes Ingress? Read on.

Setting up Ingress in Two Phases

Setting up Ingress involves two phases: setting up the Ingress Controller and defining Ingress resources. This is because Ingress is implemented using supported routing solutions, such as NGINX or Traefik, which then leverage routing and security configs specified in your Ingress resource. Kubernetes documentation provides a list of supported Ingress controllers, and you may even consider a hybrid approach.

Phase 1: Deployment of Ingress Controller

This phase involves the actual deployment of the Ingress controller. What exactly you do here depends on the controller you use. In this post, we will use the NGINX Ingress Controller. The deployment process itself will also differ according to your Kubernetes cluster setup. For example, the deployment process when using Kubernetes clusters with AWS will differ from when using them with bare metal.

For the purpose of this article, we will take one of the simpler routes to demonstrate how an NGINX controller can be installed and set up by leveraging minikube. Once minikube has been installed, simply enable the NGINX controller with the command below:

minikube addons enable ingress


The advantage of leveraging minikube is that when you use the “enable” command, minikube provisions the following for you:

  • The NGINX Ingress Controller itself
  • A config map that allows you to keep the NGINX configuration options related to logging, timeouts, SSLs, and other functionalities separate
  • A basic service that is used to expose the default NGINX pod, which receives all unmapped traffic as per the rules you specify in your Ingress resource.

After executing the “enable” command, you can check on your NGINX Ingress Controller by running the following command:

kubectl get pods -n ingress-nginx


You should see the ingress-nginx-controller pod listed with a Running status, which indicates that your NGINX Ingress Controller is up and running.

When setting up and actually deploying your controller, there are a few things you should keep in mind. First, be aware of the version of minikube you are using. For this article, we used minikube v1.19. An older version may require you to set up the config file and default service manually. Note that there are many ways to set up and deploy your NGINX controller, depending on how you plan to run your Kubernetes applications.

Phase 2: Defining the Ingress Resource

After you have successfully configured your Ingress controller, you can finally define your Ingress resources. Similar to defining other Kubernetes resources, you can define the Ingress resource in a YAML file.

The latest apiVersion for the Ingress kind is currently networking.k8s.io/v1, and it is recommended to always use the latest stable version available. Our basic ingress.yaml file, with two paths, is as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: armo-ingress-example
spec:
  ingressClassName: nginx
  rules:
  - host: armoexample.com
    http:
      paths:
      - path: /kubernetes
        pathType: Prefix
        backend:
          service:
            name: armoservice1
            port:
              number: 80
      - path: /kubernetes/security/
        pathType: Prefix
        backend:
          service:
            name: armoservice2
            port:
              number: 80


The main component of an Ingress resource is the set of rules you define under the resource ‘spec’. After all, that is the whole point of the Ingress controller: to route and secure traffic according to the rules defined. The example YAML file above routes two paths, /kubernetes and /kubernetes/security/, to armoservice1 and armoservice2, respectively.

Another thing to notice is the ‘pathType’ field, which defines how your Ingress rules match URL paths. For example, an ‘Exact’ value matches the URL exactly as defined, including case. Refer to the documentation to learn about the other path types and other fields you’ll need to configure to tailor your Ingress resource to your requirements. Finally, after defining your Ingress resource, all you need to do is deploy your application, along with all the services and Ingress components you have built.
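To illustrate the difference (the /status path and service names are hypothetical), ‘Prefix’ and ‘Exact’ rules match requests differently:

```yaml
paths:
- path: /kubernetes        # Prefix: matches /kubernetes, /kubernetes/security, etc.
  pathType: Prefix
  backend:
    service:
      name: armoservice1
      port:
        number: 80
- path: /status            # Exact: matches only /status (case-sensitive), not /status/x
  pathType: Exact
  backend:
    service:
      name: armoservice2
      port:
        number: 80
```

A third value, ‘ImplementationSpecific’, defers matching behavior to the Ingress controller in use.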

Making Ingress Work for You

As already mentioned earlier, there are several ways to set up your Ingress. You should really consider what you want to achieve with Ingress, and a lot of this consideration goes into picking the right Ingress controller. The issues Ingress solves for you are:

  • Load balancing (HTTP/2, HTTP/HTTPS, SSL/TLS termination, TCP/UDP, WebSocket, and gRPC)
  • Traffic control (rate limiting, circuit breaking, and active health checks)
  • Traffic splitting (debug routing, A/B testing, canary deployments, and blue‑green deployments)

All the major available Ingress controllers solve all these basic issues, but there are also some overheads to consider while setting up your Ingress.

API Gateway Substitute

Considering what you can do with Ingress, it is easy to think of an Ingress controller as an API gateway, reducing the need for a separate API gateway resource in your cloud architecture. When selecting an Ingress controller, consider what functionality it provides to achieve API gateway responsibilities such as TLS termination and client authentication.

Think Secrets

One of the greatest benefits of Ingress is being able to secure the traffic of your application. In the Ingress resource, you can define your TLS private key and certificates by leveraging the Kubernetes Secrets resource, instead of directly defining your TLS details in the Ingress resource.
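As a sketch, the TLS Secret referenced by an Ingress is a standard ‘kubernetes.io/tls’ Secret; the certificate and key data below are placeholders (in practice you would generate the Secret from real files, for example with ‘kubectl create secret tls’):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: armoexample-tls                    # the name referenced by the Ingress ‘tls’ section
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>    # placeholder, not real data
  tls.key: <base64-encoded private key>    # placeholder, not real data
```

Keeping the key material in a Secret rather than in the Ingress resource itself lets you rotate certificates without editing your routing rules.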

Choosing Ingress Patterns

There are three types of Ingress configurations, and choosing the right one depends on how you see east-west and north-south traffic being managed within your application:

  • Single service: As the name suggests, a single service is exposed, which in many cases, is your application’s default service. You can choose this type of Ingress configuration when you want to keep the traffic ingestion completely separate for your default path, as compared to other areas of your application.
  • Simple fanout: Similar to the example YAML shown above, multiple services can be reached through a single entry point; which service is eventually reached is determined by the paths defined in the rules.
  • Name-based virtual routing: This allows the routing of traffic from a single entry point to multiple hosts, reducing the need for you to have separate load balancers for each host.
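For example (the hostnames and services here are hypothetical), name-based virtual routing directs different hosts to different services within a single Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-based-example
spec:
  ingressClassName: nginx
  rules:
  - host: app.armoexample.com      # requests for this host go to armoservice1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: armoservice1
            port:
              number: 80
  - host: api.armoexample.com      # requests for this host go to armoservice2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: armoservice2
            port:
              number: 80
```

The controller inspects the Host header of each incoming request and routes it accordingly, so both hosts can share one external load balancer.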

Conclusion

Ingress is a very powerful component of any Kubernetes application. With its ability to reduce the number of required load balancers and to standardize the way traffic is secured, Ingress should definitely be considered when working with Kubernetes. However, getting started with Ingress, just like with any other Kubernetes concept, may be overwhelming at first.
