I Ran 9 Kubernetes Labs on a Local KIND Cluster — Here Is Everything I Learned - 2025 Update


The Setup: KIND on WSL

Assignment 1: Pods, ReplicaSets, and Deployments

Lab 1: Your First Pod

Lab 2: ReplicaSets — the First Sign of Real Resilience

Lab 3: Deployments — Zero Downtime Updates

Assignment 2: Auto-Scaling and Health Management

Lab 4: HPA — Let Kubernetes Decide How Many Pods You Need

Lab 5: Readiness Probes — the Traffic Gate

Lab 6: Liveness Probes — the Self-Healing Mechanism

Assignment 3: Services and Networking

Lab 7: ClusterIP — Stable Internal Communication

Lab 8: NodePort — Opening the Door from Outside

Lab 9: LoadBalancer — the Cloud-Native Way to Go External

What I Would Tell Anyone Starting Kubernetes

What Is Next

By Vivian Chiamaka Okose | DevOps Engineer

If you have been putting off learning Kubernetes because it feels overwhelming, this post is for you. This week I completed 9 hands-on labs covering the core building blocks of Kubernetes: Pods, ReplicaSets, Deployments, the Horizontal Pod Autoscaler, health probes, and all three Service types. I ran everything locally on a KIND (Kubernetes IN Docker) cluster inside WSL Ubuntu on Windows, with zero cloud costs. Here is exactly what I did, what broke, and what I now understand that I did not before.

The Setup: KIND on WSL

Before I could run a single kubectl command, I needed a cluster. I chose KIND because it runs Kubernetes entirely inside Docker containers on your local machine. If you already have Docker, you are most of the way there. Within two minutes I had a real Kubernetes node showing Ready. That felt good.

Assignment 1: Pods, ReplicaSets, and Deployments

Lab 1: Your First Pod

A Pod is the atomic unit in Kubernetes. Everything else in the system is built around it. I created one two ways: the imperative way (fast, not repeatable) and the declarative way (YAML, the production standard). Both appear in the code snippets at the end of this post.

The key insight: YAML is how you work in production because it is version-controllable, reviewable, and reproducible. The imperative method is fine for quick experiments, but you would never use it to manage a real system.

Lab 2: ReplicaSets — the First Sign of Real Resilience

A single Pod has a problem: if it crashes or is deleted, it is gone. You would have to manually recreate it. That is not acceptable in production. A ReplicaSet fixes this by ensuring a fixed number of Pod replicas is always running.

I applied the ReplicaSet manifest, confirmed 3 Pods were running, then deleted one. Before I could even blink, Kubernetes had already created a replacement. That moment is when auto-healing stops being a concept and becomes something you have actually seen. I also scaled from 3 to 5 by changing replicas: 3 to replicas: 5 and reapplying. Two extra Pods appeared immediately.

The catch: ReplicaSets do not support rolling updates. That is where Deployments come in.
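The labs only show the rolling-update strategy snippet for Deployments; for context, a full manifest wrapping the same nginx template might look roughly like this (a sketch: the name nginx-deployment matches the commands used later, other values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # create 1 new Pod before removing an old one
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.1
```

The Deployment owns and versions the ReplicaSet underneath it, which is what makes rollout and rollback possible.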
Lab 3: Deployments — Zero Downtime Updates

A Deployment manages a ReplicaSet and adds the features you actually need for production: rolling updates and rollback. The rolling-update strategy I used means: spin up 1 new Pod before killing an old one, and never leave any Pods unavailable. That is zero downtime. I updated the nginx image with kubectl set image, then rolled back instantly with kubectl rollout undo. The rollout history command showed me the revision log, like a git blame for infrastructure. That is powerful.

Assignment 2: Auto-Scaling and Health Management

Lab 4: HPA — Let Kubernetes Decide How Many Pods You Need

The Horizontal Pod Autoscaler reads CPU (or memory) metrics and scales your Deployment up or down automatically. For this to work on KIND, I needed to install the metrics-server with a flag for local clusters. Then I created an HPA, and in a second terminal I hammered it with a load generator.

Watching kubectl get hpa --watch while the load generator ran was one of those moments that makes DevOps genuinely fun. The Pod count went from 1 to 5. When I killed the load generator, it scaled back down on its own. This is why autoscaling matters: you pay for what you use, and your app handles spikes without anyone manually touching it.

Lab 5: Readiness Probes — the Traffic Gate

A readiness probe answers the question: "Is this Pod ready to receive user traffic?" Without a readiness probe, Kubernetes sends traffic to a Pod the moment it starts, even if the application inside is still warming up. That causes 502 errors during deployments. With a readiness probe, the Pod only enters the Service's endpoint list after passing the health check. You get clean deployments with no cold-start errors reaching users.

Lab 6: Liveness Probes — the Self-Healing Mechanism

A liveness probe answers a different question: "Is this running container still healthy?" To see it in action, I deleted the nginx index.html file inside a running Pod. After 3 consecutive failed probes (30 seconds), Kubernetes restarted the container. The RESTARTS counter went from 0 to 1. No manual action required.

The real-world case for this: imagine an app that hits a deadlock or runs out of memory and becomes unresponsive, but the process is still technically running.
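Putting the two probes side by side makes the division of labour clear. A container spec sketch using the same thresholds as the labs (image and port mirror the nginx examples; the combined layout is illustrative):

```yaml
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 80
  readinessProbe:    # gate: Pod joins Service endpoints only while this passes
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 3
  livenessProbe:     # self-heal: 3 failures (30s) trigger a container restart
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 10
    failureThreshold: 3
```

Readiness failures remove the Pod from traffic without touching the process; liveness failures restart the container.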
Without a liveness probe, that Pod sits there broken forever. With one, Kubernetes restarts it within minutes.

Assignment 3: Services and Networking

This assignment was the one that changed how I think about Kubernetes networking.

Lab 7: ClusterIP — Stable Internal Communication

Pod IPs are ephemeral. Every time a Pod restarts or is replaced, it gets a new IP. If another service is hard-coding that IP, it breaks. A ClusterIP Service solves this by providing a stable virtual IP that never changes, a stable DNS name (e.g. nginx-svc.default.svc.cluster.local), and automatic load balancing across all Ready Pods.

I proved it works by launching a busybox Pod and making requests from inside the cluster. The DNS lookup resolved to the ClusterIP and the request succeeded. That is microservice communication working exactly as it should.

Lab 8: NodePort — Opening the Door from Outside

NodePort opens a specific port (in the 30000-32767 range) on every node, making your app reachable from outside the cluster. On KIND, since there is no real node IP accessible from the host, I used port-forward. Opening http://localhost:8080 in the browser showed the NGINX welcome page. External access confirmed.

Lab 9: LoadBalancer — the Cloud-Native Way to Go External

In a real cloud environment (Azure, AWS, GCP), a LoadBalancer Service automatically provisions a public IP address and routes internet traffic to your Pods. In KIND, there is no cloud behind it, so I used MetalLB to simulate this. After configuring an IP address pool from the Docker bridge subnet, I created a LoadBalancer Service and got an external IP assigned from that pool. A curl against that external IP returned the NGINX welcome page. In Azure, that IP would be a real public internet address. MetalLB let me understand the behaviour without spending a penny on cloud resources.

What I Would Tell Anyone Starting Kubernetes

Start with Pods. Not because they are what you use in production (you use Deployments), but because understanding what a Pod is makes every other concept click faster.

Break things on purpose. Delete a Pod. Break a readiness probe. Kill a container. The fastest way to understand Kubernetes is to watch it recover from failure.

Health probes are not optional. Every production Deployment needs both a readiness probe and a liveness probe. Not having them is like deploying blind.
Services exist because Pod IPs are unreliable. Once you understand that one sentence, all three Service types make sense immediately.

KIND is excellent for local learning. You get a real Kubernetes cluster with no cloud bill. The only gotchas are the metrics-server TLS flag and needing MetalLB for LoadBalancer behaviour.

What Is Next

Next up: Kubernetes Ingress, ConfigMaps, Secrets, and persistent storage. Each week I am writing up what I actually did, what broke, and what I learned from it. If you are on a similar path, follow along. And if you are already deep into Kubernetes, I would love to know what concept took the longest to click for you.

Vivian Chiamaka Okose

DevOps Engineer | LinkedIn | GitHub

Command Reference

The Setup: installing KIND and creating the cluster:

```bash
# Install KIND
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create the cluster
kind create cluster --name k8s-labs
kubectl get nodes
```

Lab 1, the imperative way:

```bash
kubectl run nginx-pod --image=nginx
kubectl get pods
```

Lab 1, the declarative way (nginx-pod.yaml):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
```

```bash
kubectl apply -f nginx-pod.yaml
kubectl describe pod nginx-pod
kubectl logs nginx-pod
kubectl exec -it nginx-pod -- /bin/bash
```

Lab 2, the ReplicaSet:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.1
```

```bash
kubectl delete pod nginx-replicaset-xxxxx
kubectl get pods
```

Lab 3, the Deployment rolling-update strategy:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
```

```bash
kubectl set image deployment/nginx-deployment nginx=nginx:1.23.0
kubectl rollout status deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment
```

Lab 4, metrics-server (with the local-cluster TLS flag) and the HPA:

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=5
```

Lab 4, the load generator:

```bash
kubectl run -i --tty load-generator --rm --image=busybox:1.28 \
  --restart=Never -- /bin/sh -c \
  "while sleep 0.01; do wget -q -O- http://php-apache; done"
```

Lab 5, the readiness probe:

```yaml
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3
```

Lab 6, the liveness probe and breaking it on purpose:

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3
```

```bash
kubectl exec -it <pod-name> -- rm /usr/share/nginx/html/index.html
kubectl get pods --watch
```

Lab 7, the ClusterIP Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```

```bash
kubectl exec -it tester -- sh -c "wget -qO- http://nginx-svc | head -5"
kubectl exec -it tester -- nslookup nginx-svc
```

A ClusterIP Service provides:

- A stable virtual IP that never changes
- A stable DNS name (e.g. nginx-svc.default.svc.cluster.local)
- Automatic load balancing across all Ready Pods

Lab 8, the NodePort spec and port-forward:

```yaml
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
```

```bash
kubectl port-forward svc/nginx-nodeport 8080:80
```

Lab 9, installing MetalLB and checking the LoadBalancer Service:

```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
kubectl get svc nginx-loadbalancer
# NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)
# nginx-loadbalancer   LoadBalancer   10.96.xxx.xxx   172.18.255.200   80:xxxxx/TCP
```
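The IP address pool configuration referenced in the write-up is not shown above. With MetalLB (v0.13+), it typically takes this shape; a sketch in which the resource names and the 172.18.255.200-172.18.255.250 range are illustrative, picked from the Docker bridge subnet as the post describes:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250   # unused slice of the Docker "kind" network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - kind-pool
```

Once applied, MetalLB hands out EXTERNAL-IP values from this pool to any Service of type LoadBalancer, which is why the example output above shows 172.18.255.200.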