I Ran 9 Kubernetes Labs on a Local KIND Cluster — Here Is Everything I Learned - 2025 Update
The Setup: KIND on WSL
Assignment 1: Pods, ReplicaSets, and Deployments
Lab 1: Your First Pod
Lab 2: ReplicaSets — the First Sign of Real Resilience
Lab 3: Deployments — Zero Downtime Updates
Assignment 2: Auto-Scaling and Health Management
Lab 4: HPA — Let Kubernetes Decide How Many Pods You Need
Lab 5: Readiness Probes — the Traffic Gate
Lab 6: Liveness Probes — the Self-Healing Mechanism
Assignment 3: Services and Networking
Lab 7: ClusterIP — Stable Internal Communication
Lab 8: NodePort — Opening the Door from Outside
Lab 9: LoadBalancer — the Cloud-Native Way to Go External
What I Would Tell Anyone Starting Kubernetes
What Is Next

By Vivian Chiamaka Okose | DevOps Engineer

If you have been putting off learning Kubernetes because it feels overwhelming, this post is for you. This week I completed 9 hands-on labs covering the core building blocks of Kubernetes: Pods, ReplicaSets, Deployments, the Horizontal Pod Autoscaler, health probes, and all three Service types. I ran everything locally on a KIND (Kubernetes IN Docker) cluster inside WSL Ubuntu on Windows, with zero cloud costs. Here is exactly what I did, what broke, and what I now understand that I did not before.

The Setup: KIND on WSL

Before I could run a single kubectl command, I needed a cluster. I chose KIND because it runs Kubernetes entirely inside Docker containers on your local machine. If you already have Docker, you are most of the way there. Within two minutes I had a real Kubernetes node showing Ready. That felt good.

Assignment 1: Pods, ReplicaSets, and Deployments

Lab 1: Your First Pod

A Pod is the atomic unit in Kubernetes. Everything else in the system is built around it. I created one two ways: the imperative way (fast, not repeatable) and the declarative way (YAML, the production standard).

The key insight: YAML is how you work in production because it is version-controllable, reviewable, and reproducible. The imperative method is fine for quick experiments, but you would never use it to manage a real system.

Lab 2: ReplicaSets — the First Sign of Real Resilience

A single Pod has a problem: if it crashes or is deleted, it is gone. You would have to manually recreate it. That is not acceptable in production. A ReplicaSet fixes this by ensuring a fixed number of Pod replicas is always running.

I applied the manifest, confirmed 3 Pods were running, then deleted one. Before I could even blink, Kubernetes had already created a replacement. That moment is when auto-healing stops being a concept and becomes something you have actually seen. I also scaled from 3 to 5 by changing replicas: 3 to replicas: 5 and reapplying. Two extra Pods appeared immediately.

The catch: ReplicaSets do not support rolling updates. That is where Deployments come in.
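The lab's own manifests are not reproduced here, but a minimal ReplicaSet matching what is described (3 nginx replicas, scaled to 5 by editing the replica count) might look like this sketch; the names, labels, and image tag are illustrative assumptions:

```yaml
# nginx-replicaset.yaml -- illustrative sketch, not the lab's exact manifest
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3           # change to 5 and re-apply to scale out
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx      # must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
```

Applying it with `kubectl apply -f nginx-replicaset.yaml`, deleting one Pod with `kubectl delete pod <name>`, and watching `kubectl get pods` is enough to see the replacement appear, the self-healing moment described above.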
Lab 3: Deployments — Zero Downtime Updates

A Deployment manages a ReplicaSet and adds the features you actually need for production: rolling updates and rollback. My config meant: spin up 1 new Pod before killing an old one, and never leave any Pods unavailable. That is zero downtime. Updating the nginx image and rolling back to the previous revision are each a single command, and the rollout history command showed me the revision log, like git blame for infrastructure. That is powerful.

Assignment 2: Auto-Scaling and Health Management

Lab 4: HPA — Let Kubernetes Decide How Many Pods You Need

The Horizontal Pod Autoscaler reads CPU (or memory) metrics and scales your Deployment up or down automatically. For this to work on KIND, I needed to install the metrics-server with a TLS flag for local clusters. Then I created an HPA and, in a second terminal, hammered the app with a load generator. Watching kubectl get hpa --watch while the load generator ran was one of those moments that make DevOps genuinely fun. The Pod count went from 1 to 5. When I killed the load generator, it scaled back down on its own. This is why autoscaling matters: you pay for what you use, and your app handles spikes without anyone manually touching it.

Lab 5: Readiness Probes — the Traffic Gate

A readiness probe answers the question: "Is this Pod ready to receive user traffic?" Without a readiness probe, Kubernetes sends traffic to a Pod the moment it starts, even if the application inside is still warming up. That causes 502 errors during deployments. With a readiness probe, the Pod only enters the Service's endpoint list after passing the health check. You get clean deployments with no cold-start errors reaching users.

Lab 6: Liveness Probes — the Self-Healing Mechanism

A liveness probe answers a different question: "Is this running container still healthy?" To see it in action, I deleted the nginx index.html file inside a running Pod. After 3 consecutive failed probes (30 seconds), Kubernetes restarted the container. The RESTARTS counter went from 0 to 1. No manual action required. The real-world case for this: imagine an app that hits a deadlock or runs out of memory and becomes unresponsive, but the process is still technically running.
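A liveness probe consistent with that behaviour (a check every 10 seconds, restart after 3 consecutive failures, roughly 30 seconds in total) might look like the following sketch; the Pod name, image tag, and timing values are assumptions, not the lab's exact manifest:

```yaml
# liveness-demo.yaml -- illustrative sketch with assumed values
apiVersion: v1
kind: Pod
metadata:
  name: nginx-liveness
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      livenessProbe:
        httpGet:
          path: /         # normally serves index.html
          port: 80
        periodSeconds: 10   # probe every 10 seconds
        failureThreshold: 3 # 3 failures, about 30s, before a restart
```

Breaking it the way the post describes, for example with `kubectl exec nginx-liveness -- rm /usr/share/nginx/html/index.html`, turns the probe's request into an error response, and `kubectl get pods` shows the RESTARTS counter tick up once the threshold is hit.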
Without a liveness probe, that Pod sits there broken forever. With one, Kubernetes restarts it within minutes.

Assignment 3: Services and Networking

This assignment was the one that changed how I think about Kubernetes networking. Pod IPs are ephemeral: every time a Pod restarts or is replaced, it gets a new IP. If another service hard-codes that IP, it breaks.

Lab 7: ClusterIP — Stable Internal Communication

A ClusterIP Service solves this by providing a stable virtual IP and an internal DNS name. I proved it works by launching a busybox Pod and making requests from inside the cluster. The DNS lookup resolved to the ClusterIP and the request succeeded. That is microservice communication working exactly as it should.

Lab 8: NodePort — Opening the Door from Outside

NodePort opens a specific port (between 30000 and 32767) on every node, making your app reachable from outside the cluster. On KIND, since there is no real node IP accessible from the host, I used port-forward. Opening http://localhost:8080 in the browser showed the NGINX welcome page. External access confirmed.

Lab 9: LoadBalancer — the Cloud-Native Way to Go External

In a real cloud environment (Azure, AWS, GCP), a LoadBalancer Service automatically provisions a public IP address and routes internet traffic to your Pods. In KIND there is no cloud behind it, so I used MetalLB to simulate this. After configuring an IP address pool from the Docker bridge subnet, I created a LoadBalancer Service and got an external IP assigned from that pool. A curl against that external IP returned the NGINX welcome page. In Azure, that IP would be a real public internet address. MetalLB let me understand the behaviour without spending a penny on cloud resources.

What I Would Tell Anyone Starting Kubernetes

Start with Pods. Not because they are what you use in production (you use Deployments), but because understanding what a Pod is makes every other concept click faster.

Break things on purpose. Delete a Pod. Break a readiness probe. Kill a container. The fastest way to understand Kubernetes is to watch it recover from failure.

Health probes are not optional. Every production Deployment needs both a readiness probe and a liveness probe. Not having them is like deploying blind.
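As a reference point for that takeaway, a Deployment carrying both probes might be sketched like this; all names, paths, and timing values here are illustrative assumptions rather than anything from the labs:

```yaml
# probes-sketch.yaml -- illustrative values, not a production recommendation
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          readinessProbe:          # gates Service traffic until the app is warm
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:           # restarts the container if it stops responding
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
            failureThreshold: 3
```

The split matters: a failing readiness probe only removes the Pod from the Service's endpoints, while a failing liveness probe triggers a restart.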
Services exist because Pod IPs are unreliable. Once you understand that one sentence, all three Service types make sense immediately.

KIND is excellent for local learning. You get a real Kubernetes cluster with no cloud bill. The only gotchas are the metrics-server TLS flag and needing MetalLB for LoadBalancer behaviour.

What Is Next

Next up: Kubernetes Ingress, ConfigMaps, Secrets, and persistent storage. Each week I am writing up what I actually did, what broke, and what I learned from it. If you are on a similar path, follow along. And if you are already deep into Kubernetes, I would love to know what concept took the longest to click for you.

Vivian Chiamaka Okose
DevOps Engineer
LinkedIn | GitHub