Tools: I Built a Multi-Service Kubernetes App and Here's What Actually Broke

Source: Dev.to

#kubernetes #devops #learning #microservices

I spent the last few weeks deploying a multi-service voting application on Kubernetes. Not because I needed a voting app, but because I needed to understand how Kubernetes actually handles real application traffic.

There's a gap between running a single container in a pod and understanding how multiple services discover each other, how traffic flows internally, and how external requests actually reach your application. This project closed that gap for me.

## What I Built

A voting system with five independent components:

- Voting frontend (where users vote)
- Results frontend (where users see results)
- Redis (acting as a queue)
- PostgreSQL (persistent storage)
- Worker service (processes votes asynchronously)

Each component runs in its own container, and each is managed independently by Kubernetes. None of them know pod IPs; everything communicates through service discovery. This mirrors how real microservices work in production.

## The Architecture Isn't Random

I didn't pick this setup arbitrarily. This is what actual distributed systems look like:

- Frontend services are stateless and can scale horizontally
- Data services are isolated for persistence
- Communication happens via stable network abstractions
- External traffic enters through a controlled entry point

Kubernetes handles the orchestration. I needed to understand how.

## Kubernetes Objects I Actually Used

**Deployments.** These manage the workloads. They define replica counts and ensure pods get recreated if they fail. Every major component runs as a Deployment.

**Pods.** The smallest unit Kubernetes schedules. They're ephemeral. They die and get recreated. You never access them directly.

**Services.** This is where it clicked for me. Services provide stable DNS names and IPs. Pods can change IPs constantly; Services don't. All internal communication goes through Services. I used:

- ClusterIP for internal-only communication (Redis, PostgreSQL)
- NodePort temporarily for testing frontends before I understood Ingress

**Ingress.** Defines HTTP routing rules for external traffic: host-based and path-based routing through a single entry point.
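To make the Service and Ingress objects concrete, here's a hedged sketch of two manifests. These are not the repo's actual files; the names (`redis`, `vote`, `vote.local`) and the nginx ingress class are illustrative assumptions.

```yaml
# Sketch: a ClusterIP Service giving Redis a stable DNS name,
# plus an Ingress routing external HTTP traffic to the voting frontend.
apiVersion: v1
kind: Service
metadata:
  name: redis          # other pods reach Redis as "redis", never by pod IP
spec:
  type: ClusterIP      # internal-only; no external exposure
  selector:
    app: redis         # must match the labels on the Redis pods
  ports:
    - port: 6379
      targetPort: 6379
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vote-ingress
spec:
  ingressClassName: nginx        # assumes an nginx Ingress Controller is installed
  rules:
    - host: vote.local           # illustrative host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vote       # the voting frontend's Service
                port:
                  number: 80
```

Note that the Ingress references a Service, not pods: even at the external entry point, everything routes through the stable Service abstraction.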
Here's what tripped me up: Ingress resources don't do anything by themselves.

**Ingress Controller.** This is the component that actually receives and processes traffic. It runs as a pod and dynamically configures itself based on Ingress rules. Without an Ingress Controller installed, your Ingress rules are useless. I learned this the hard way.

## How Traffic Actually Flows

## Internal Traffic

- Voting frontend sends votes to Redis using the Redis Service name
- Worker reads from Redis using the Redis Service name
- Worker writes results to PostgreSQL using the database Service name
- Results frontend reads from PostgreSQL using the database Service name

No pod IPs anywhere. Service DNS gets resolved automatically by Kubernetes.

## External Traffic

From the browser to the application:

- User sends HTTP request
- Request hits the Ingress Controller
- Ingress rules get evaluated
- Traffic forwards to the correct Service
- Service load-balances to backend pods

Ingress operates at the HTTP level. It's the production-grade way to expose applications.

## What Actually Broke (and What I Learned)

## Pod IPs Keep Changing

Pods were getting recreated automatically, and their IPs changed every time. Hardcoding IPs didn't work.

Solution: Use Services. Always. Services provide stable endpoints; this is what they're designed for.

## Service Types Confused Me

I didn't understand why there were multiple Service types or when to use which one.

Solution: ClusterIP is for internal communication only. NodePort exposes services on node IPs (useful for testing, not for production). Ingress is the right way to handle external HTTP traffic.

## Ingress Didn't Work

I created Ingress resources, but traffic still wasn't reaching my apps.

Solution: You need an Ingress Controller installed separately. The Ingress resource is just configuration; the controller is what actually processes traffic. Once I installed the controller, everything worked.

## Ingress Controller Wouldn't Schedule

The controller pod was stuck in the Pending state.

Solution: In my local cluster, I needed to fix node labels and tolerations so the controller could schedule on the control-plane node. This doesn't happen in cloud environments, but it matters in local setups.

## Local Networking Doesn't Work Like Cloud

External access from my browser didn't work directly in my container-based local cluster.

Solution: Port forwarding. I forwarded the Ingress Controller port locally. This simulates how cloud load balancers work, adapted for local development.

## Service Names Didn't Resolve Everywhere

Service names weren't resolving across namespaces or from outside the cluster.

Solution: Kubernetes service DNS is namespace-scoped by default.
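To make the namespace scoping concrete, here's a hedged fragment of a worker container spec. The namespace `vote` and the env var name `REDIS_HOST` are my illustrations, not necessarily what the repo uses.

```yaml
# Inside the "vote" namespace, the short name "redis" resolves fine.
# From any other namespace, you need at least "redis.vote", or the
# fully qualified form: <service>.<namespace>.svc.cluster.local
env:
  - name: REDIS_HOST
    value: redis.vote.svc.cluster.local  # resolves from anywhere in the cluster
```
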
I learned to use fully qualified domain names when needed, and I now understand where DNS resolution actually works.

## What I Actually Understand Now

Before this project, I could write Kubernetes manifests. But I didn't really get how the pieces connected:

- Kubernetes networking is service-driven, not pod-driven
- Ingress needs both rules and a controller to function
- Local clusters behave differently than cloud clusters
- Service discovery happens through DNS, not hardcoded IPs
- Debugging requires understanding both the platform and the application

## Why This Matters

This isn't about running containers. It's about understanding how Kubernetes:

- Routes traffic between services
- Discovers services dynamically
- Separates internal and external networking
- Enforces declarative state

Once this mental model clicked, advanced topics started making sense.

## Real Takeaway

Build it once to make it work. Break it to understand why it works.

I could have just deployed this app using a tutorial and called it done. But I wouldn't have learned how service discovery actually functions, or why Ingress controllers exist, or what happens when pods get recreated. The debugging forced me to understand the platform, not just the syntax.

If you're learning Kubernetes, pick a multi-service application and deploy it. Then break it. Then fix it. That's where the understanding comes from.

What's been the hardest part of Kubernetes for you? Drop a comment.

Full code and setup instructions: kubernetes-sample-voting-app-project1

Architecture diagram and detailed breakdown: