Kubernetes Networking Evolution: From Pods to Gateway API

Source: Dev.to

Containers and Kubernetes are all the rage, but if you are new, you might still be wondering how Kubernetes actually connects your applications to the outside world and across your cluster. This article walks through that progression, from Pods to Services to Ingress to the modern Gateway API, in a way that maps to how real clusters handle traffic.

Kubernetes is an open-source platform for orchestrating containers at scale. It handles scheduling, self-healing, scaling, and (very importantly) networking for distributed applications. It is what teams use to deploy microservices in production. In Kubernetes, your running workloads live in pods: the smallest deployable unit. If there’s one thing to know about pods, it is that they are ephemeral. Kubernetes can create, reschedule, or replace them at any time. That means their IP addresses can (and will) change. This isn’t a bug; it’s a feature that enables resilience. But it also introduces a networking challenge.

Pods and the Need for Stable Networking

Pods can come and go, so how do applications talk to each other reliably? If an application pod is replaced, its IP changes, and other services shouldn’t break. Kubernetes solves this with stable Service objects. A Service acts as a consistent endpoint for a group of pods. It gets a stable IP (called a ClusterIP) and can load-balance traffic to the healthy pods behind it. This lets your services discover and talk to each other regardless of pod churn. But there’s one big limitation: Kubernetes Services are in-cluster only. That means they help internal communication, but they don’t expose your application to the outside world on their own.
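To make this concrete, here is a minimal sketch of such a Service. The name, labels, and ports are illustrative assumptions, not taken from the walkthrough:

```yaml
# Hypothetical Service giving a stable ClusterIP to a set of pods.
# The name, labels, and ports below are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: ClusterIP          # default type: reachable only inside the cluster
  selector:
    app: web-app           # matches pods carrying this label, however often they churn
  ports:
    - port: 80             # port other workloads use to reach the Service
      targetPort: 8080     # port the pods actually listen on
```

Other workloads address this Service by its stable name and ClusterIP while the pods behind it come and go.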
Ingress: The Classic North-South Gateway

To expose HTTP(S) services outside the cluster, Kubernetes introduced Ingress. An Ingress resource contains rules for routing external HTTP(S) traffic into your cluster. For example:

```yaml
rules:
  - host: example.com
    http:
      paths:
        - path: /
          backend:
            service:
              name: web-service
              port:
                number: 80
```

An Ingress needs a controller (like NGINX, Traefik, or a cloud provider's ingress controller) that implements these rules and actually handles traffic. Ingress was a huge step forward: it standardised the way many applications are exposed externally without resorting to a Service of type LoadBalancer for every application. But it also had limitations:

- Ingress was focused strictly on HTTP/HTTPS.
- It had limited expressiveness for advanced routing.
- Many features were controller-specific (e.g., annotations).
- Multiple teams touching ingress rules can cause configuration conflicts.

These shortcomings became clearer as Kubernetes adoption grew.

Enter Gateway API: The Next Generation

The Gateway API is a family of Kubernetes custom resource definitions (CRDs) designed to evolve and replace the old Ingress API with something more flexible, extensible, and expressive. It is being adopted across the industry as a future-proof way to manage traffic in Kubernetes. Instead of one overloaded resource doing everything, responsibilities are explicit: a GatewayClass identifies the controller, a Gateway defines where traffic enters the cluster, and HTTPRoutes describe how requests are matched and forwarded.

To understand whether this actually improves things in practice, I built a small project. Project goal: demonstrate how the Kubernetes Gateway API improves traffic management compared to Ingress by deploying a multi-service application and exposing it externally using NGINX Gateway Fabric.

Step-by-Step Walkthrough

To follow along, you will need:

- A running Kubernetes cluster (kind, minikube, or managed)
- Basic understanding of Kubernetes YAML
- Willingness to debug version mismatches

- I carried out this walkthrough on an EC2 instance to simulate a realistic cloud environment. I launched an instance with sufficient memory and storage, then connected to it remotely using VS Code over SSH.
- Before installing any tools, I updated the system’s package index to ensure I was working with the latest available versions.
- I installed the core tools needed for this walkthrough:
  - Docker – to run containers
  - Kind – to run Kubernetes locally inside Docker
  - kubectl – to interact with the Kubernetes cluster
  - Helm – to install the NGINX Gateway Fabric controller
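The next step creates the cluster from a Kind configuration file. The walkthrough does not show its exact contents, but a minimal sketch might look like the following; the NodePort mapping is an assumption added so traffic from outside the Kind node can reach the gateway:

```yaml
# kind-config.yaml – hypothetical sketch of a Kind cluster configuration.
# The port mapping is an assumption: it forwards a NodePort on the EC2 host
# into the Kind node so external traffic can reach the gateway's Service.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # NodePort exposed inside the cluster (illustrative)
        hostPort: 30080        # port opened on the EC2 host
        protocol: TCP
```

With a file like this, the cluster would be created with `kind create cluster --name gateway-api-demo --config kind-config.yaml`.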
- I created a Kubernetes cluster named gateway-api-demo using Kind and a configuration file. This cluster will host all Gateway API resources and workloads.
- I installed the Gateway API CRDs, which introduce new Kubernetes resource types such as GatewayClass, Gateway, and HTTPRoute. These are definitions only; they do not route traffic by themselves. They simply tell Kubernetes what kinds of objects are allowed to exist.
- With the CRDs installed, I installed NGINX Gateway Fabric using Helm. This component is the actual controller that watches Gateway API resources and turns them into real NGINX configuration.
- As part of its startup process, NGINX Gateway Fabric automatically creates a GatewayClass named nginx. This GatewayClass declares that NGINX Gateway Fabric is responsible for implementing any Gateway that references it.
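For reference, that GatewayClass looks roughly like the sketch below. The controllerName shown is an assumption based on NGINX Gateway Fabric's documented value; verify it against your installed version:

```yaml
# Approximate shape of the GatewayClass created by NGINX Gateway Fabric.
# The controllerName is an assumption; confirm with:
#   kubectl get gatewayclass nginx -o yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: nginx
spec:
  controllerName: gateway.nginx.org/nginx-gateway-controller
```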
- To demonstrate routing behaviour, I deployed three simple Python-based HTTP servers, each representing a different device-specific frontend (one such Deployment is sketched below). All applications were deployed into the same namespace.
- At this stage, pods are running, but no external traffic can reach them yet.
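A minimal sketch of one of these Deployments, assuming a hypothetical mobile-frontend that serves on port 8080 with Python's built-in HTTP server (the name, labels, image, and port are illustrative, not the original manifests):

```yaml
# Hypothetical Deployment for one device-specific frontend.
# Name, labels, image, and port are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mobile-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mobile-frontend
  template:
    metadata:
      labels:
        app: mobile-frontend
    spec:
      containers:
        - name: web
          image: python:3.12-alpine
          command: ["python", "-m", "http.server", "8080"]   # simple built-in HTTP server
          ports:
            - containerPort: 8080
```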
- I then created a Gateway resource. The Gateway defines where traffic enters the cluster by specifying listeners (ports and protocols) and referencing the nginx GatewayClass; a sketch of such a Gateway follows below.
- Describing the Gateway shows that NGINX Gateway Fabric accepted it, the Gateway was successfully programmed, a Service was created and exposed, and no routes are attached yet.
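For reference, a minimal sketch of such a Gateway. The resource name is illustrative; the essential part is the gatewayClassName: nginx reference and an HTTP listener:

```yaml
# Hypothetical Gateway attached to the nginx GatewayClass.
# The name is an illustrative assumption; the listener accepts plain HTTP on port 80.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway
spec:
  gatewayClassName: nginx      # handled by NGINX Gateway Fabric
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```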
- This implies that the Gateway is live and listening for traffic, but has no routing rules.
- The GatewayClass confirms that NGINX Gateway Fabric is the active controller responsible for handling Gateways that reference it.
- At this point, the ngf-gatewayapi-ns namespace contains the NGINX Gateway Fabric controller pods, plus the Gateway and its supporting resources.
- I attempted to access the application via: :. I also updated the EC2 security group to allow inbound traffic on the NodePort.
- The request failed. This behaviour is expected: the Gateway accepts traffic, but there is no backend defined and no route exists to forward traffic to any service. This is where the HTTPRoute comes in.
- The HTTPRoute defines how requests are matched, specifies which backend Service should receive traffic, and attaches itself to a Gateway using parentRefs; a sketch of one follows below.
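A minimal sketch of such an HTTPRoute, assuming hypothetical mobile-frontend, laptop-frontend, and tablet-frontend Services and path-based matching (the original post does not show its exact rules, so names, paths, and ports are assumptions):

```yaml
# Hypothetical HTTPRoute attaching to the Gateway via parentRefs and
# forwarding requests to different backends based on the request path.
# Service names, paths, and ports are illustrative assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: frontend-routes
spec:
  parentRefs:
    - name: demo-gateway        # the Gateway created earlier (illustrative name)
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /mobile
      backendRefs:
        - name: mobile-frontend
          port: 8080
    - matches:
        - path:
            type: PathPrefix
            value: /laptop
      backendRefs:
        - name: laptop-frontend
          port: 8080
    - matches:
        - path:
            type: PathPrefix
            value: /tablet
      backendRefs:
        - name: tablet-frontend
          port: 8080
```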
- I created three Services and a single HTTPRoute that forwards traffic to the appropriate backend based on request rules.
- After applying the HTTPRoute, traffic could flow freely.
- I was able to access the applications successfully, as traffic was routed through the Gateway and its proxy pods.