Tools: How I went from Docker Compose to production EKS without burning AWS budget on mistakes

Source: Dev.to

## The Problem with Learning Kubernetes Directly on AWS

Most tutorials for Kubernetes on AWS tell you to:

- Spin up an EKS cluster
- Apply your manifests
- Debug why nothing works
- Watch your bill climb while you figure it out

EKS charges $0.10 per hour just for the control plane, before a single EC2 node is added. That is about $72/month for a cluster sitting idle while you troubleshoot ingress routing, secret injection failures, and container networking issues. That is fine on a company budget. It hurts when you are learning on your own.

There is a better workflow: test locally, prove it works, then pay for AWS.

## What I Built

A production-grade containerized web application with a staged deployment workflow:

- Stage 1: Docker Compose for fast local iteration
- Stage 2: Minikube to validate Kubernetes behavior before touching AWS
- Stage 3: EKS with full CI/CD, AWS Secrets Manager, and automated ALB provisioning

App repo: https://github.com/escanut/fastapi-k8s-project
Infra repo: https://github.com/escanut/fastapi-aws-infra

The app is a FastAPI backend with async request handling and connection pooling, connected to PostgreSQL, plus a vanilla JS frontend. It is a simple product catalog with create, retrieve, and delete operations. Nothing fancy, on purpose: the focus is the infrastructure pattern and the delivery workflow.

## Stage 1: Docker Compose

Goal: instant feedback on code changes with zero cluster overhead.

```yaml
services:
  postgres:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin -d products"]
  backend:
    build: ./backend
    environment:
      DB_HOST: postgres
    volumes:
      - ./backend:/app
  frontend:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
```

Nginx routes /api traffic to FastAPI. Credentials are plain env variables here, intentionally different from production. That contrast is the lesson.

Validates: app logic, queries, routing, frontend-backend communication.
Does not validate: Kubernetes behavior.

## Stage 2: Minikube

Goal: catch Kubernetes failures at zero cost before provisioning AWS.

```bash
minikube start
eval $(minikube docker-env)
minikube addons enable ingress
kubectl apply -f dev/k8s/
```

The setup mirrors production closely:

- Secrets come from Kubernetes Secrets, not env vars
- Postgres runs as a pod with a PVC
- Ingress is configured like production

Broken ingress, wrong ports, and secret issues show up here instead of on a paid cluster. Minikube is not identical to ALB, but it is close enough to surface real problems early.

## The CI/CD Pipeline

On every push to main:

```text
Configure AWS Credentials           # aws-actions/configure-aws-credentials@v4
Build and push frontend & backend   # docker build + push
Update deployments                  # kubectl set image ...
Verify                              # kubectl rollout status ...
```

Each image is tagged with the commit SHA. Rollback just means redeploying a previous SHA. No guesswork. OIDC is used for short-lived credentials; no static keys are stored.

## VPC Endpoints: Why They Matter Beyond Security

NAT data processing costs $0.045/GB, and image pulls, logs, and secrets traffic add up fast. Without endpoints, AWS API traffic takes the long way around:

```text
Worker → NAT → Internet → AWS API → Internet → NAT → Worker
```

Eight endpoints route AWS-internal traffic privately. NAT remains only for external access during bootstrap. This results in lower cost and reduced exposure.

## Secrets Management Progression

```text
Stage 1: Plain env var
Stage 2: K8s Secret
Stage 3: ExternalSecret synced from Secrets Manager
```

The production password is randomly generated and stored in Secrets Manager. Terraform state is encrypted. The password is never floating around in plaintext.

## IAM Design: Least Privilege

- GitHub Actions: scoped OIDC role
- Node role: read-only ECR
- Backend pods: read a specific secret via IRSA
- ALB Controller: ALB-only permissions

Each component gets only what it needs to perform its task.

## Post-Install: Why Not Everything in Terraform

post-install.sh installs:

- AWS Load Balancer Controller
- External Secrets Operator

CRDs plus dependency ordering make Terraform-managed Helm messy here. This approach is predictable and aligns with upstream guidance.

## Infrastructure Cost Snapshot

Endpoints reduce NAT data charges on AWS API traffic. In active pipelines, that tradeoff makes sense.

## What This Demonstrates

- Cost-aware Kubernetes learning path
- OIDC instead of static keys
- IRSA for pod-level IAM
- VPC endpoints for cost and security
- External Secrets integration
- Fully automated ALB provisioning

## Try it

The repos are public and documented. Start local, move to Minikube, then to EKS. If something breaks or you want to discuss design choices, reach out. I am always refining this setup myself.

## Open to Work

Currently seeking remote roles in Cloud, DevOps, and Platform Engineering.

- LinkedIn: https://linkedin.com/in/victorojeje
- Email: [email protected]
- GitHub: https://github.com/escanut
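The Stage 2 move from env vars to Kubernetes Secrets might look like the following sketch. The secret name, key names, and image tag are hypothetical; the real manifests live in `dev/k8s/` in the app repo.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical name
type: Opaque
stringData:
  POSTGRES_PASSWORD: dev-only-password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend:local
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:       # injected from the Secret, not hardcoded
                  name: db-credentials
                  key: POSTGRES_PASSWORD
```

The deployment never contains the password itself, which is exactly the shape that carries over unchanged to EKS once the Secret is produced by External Secrets instead of `kubectl apply`.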
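The pipeline steps could be expressed as a GitHub Actions workflow roughly like this. It is a sketch: the role ARN, region, registry variable, and deployment names are placeholders, and the real workflow in the repo builds both frontend and backend images.

```yaml
name: deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write   # required for OIDC federation, no static keys
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy  # placeholder
          aws-region: us-east-1

      - name: Build and push backend
        run: |
          docker build -t "$ECR_REGISTRY/backend:${{ github.sha }}" ./backend
          docker push "$ECR_REGISTRY/backend:${{ github.sha }}"

      - name: Update deployment
        run: kubectl set image deployment/backend backend="$ECR_REGISTRY/backend:${{ github.sha }}"

      - name: Verify
        run: kubectl rollout status deployment/backend
```

Because every image is tagged with `${{ github.sha }}`, rolling back is just rerunning the `kubectl set image` step with an earlier SHA.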
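Stage 3 of the secrets progression, an `ExternalSecret` synced from Secrets Manager, might look like the sketch below. The store name, target name, and Secrets Manager key are illustrative, not copied from the infra repo.

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials        # hypothetical name
spec:
  refreshInterval: 1h         # re-sync from Secrets Manager hourly
  secretStoreRef:
    name: aws-secrets-manager # hypothetical ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: db-credentials      # the K8s Secret the operator creates
  data:
    - secretKey: POSTGRES_PASSWORD
      remoteRef:
        key: prod/db-password # hypothetical Secrets Manager entry
```

The operator materializes an ordinary Kubernetes Secret, so the backend deployment keeps the same `secretKeyRef` it used on Minikube; only the source of truth moves to Secrets Manager, read via IRSA.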
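The async connection pooling the backend relies on can be illustrated with a minimal, stdlib-only sketch of the idea: a fixed set of connections shared by many concurrent requests. The real app uses FastAPI and a PostgreSQL driver; `FakeConnection`, `ConnectionPool`, and every name below are illustrative, not taken from the repo.

```python
import asyncio

class FakeConnection:
    """Stand-in for a real database connection (e.g. from asyncpg)."""
    def __init__(self, conn_id: int):
        self.conn_id = conn_id

    async def fetchval(self, query: str) -> str:
        await asyncio.sleep(0)  # simulate async I/O
        return f"conn {self.conn_id}: {query}"

class ConnectionPool:
    """Bounded pool: connections are created once and reused."""
    def __init__(self, size: int):
        self._queue: asyncio.Queue = asyncio.Queue()
        for i in range(size):
            self._queue.put_nowait(FakeConnection(i))

    async def acquire(self) -> FakeConnection:
        return await self._queue.get()  # waits if every connection is busy

    def release(self, conn: FakeConnection) -> None:
        self._queue.put_nowait(conn)

async def handle_request(pool: ConnectionPool, n: int) -> str:
    conn = await pool.acquire()
    try:
        return await conn.fetchval(f"SELECT {n}")
    finally:
        pool.release(conn)  # always return the connection to the pool

async def main() -> list:
    pool = ConnectionPool(size=2)
    # Five concurrent "requests" share two connections.
    return await asyncio.gather(*(handle_request(pool, n) for n in range(5)))

results = asyncio.run(main())
print(len(results))  # 5
```

The key property is backpressure: when all connections are checked out, `acquire` simply waits instead of opening a new connection, which is what keeps PostgreSQL from being overwhelmed under load.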
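The endpoint-versus-NAT tradeoff is easy to put numbers on. The $0.045/GB NAT processing rate is from the article; the interface-endpoint rates assumed below ($0.01/hour per AZ and $0.01/GB processed) are typical us-east-1 list prices, so check current AWS pricing before relying on them.

```python
# Rough monthly break-even for one interface VPC endpoint in one AZ.
HOURS_PER_MONTH = 730

NAT_PER_GB = 0.045        # NAT gateway data processing (from the article)
ENDPOINT_PER_HOUR = 0.01  # assumed interface endpoint hourly rate, per AZ
ENDPOINT_PER_GB = 0.01    # assumed interface endpoint data processing rate

def monthly_nat_cost(gb: float) -> float:
    return NAT_PER_GB * gb

def monthly_endpoint_cost(gb: float) -> float:
    return ENDPOINT_PER_HOUR * HOURS_PER_MONTH + ENDPOINT_PER_GB * gb

# Break-even traffic: where the endpoint's fixed hourly cost is paid back
# by the cheaper per-GB rate.
break_even_gb = (ENDPOINT_PER_HOUR * HOURS_PER_MONTH) / (NAT_PER_GB - ENDPOINT_PER_GB)
print(round(break_even_gb, 1))  # ~208.6 GB/month
```

Under these assumptions an endpoint pays for itself at roughly 200 GB of AWS API traffic a month, which an active pipeline pulling images and shipping logs clears easily; below that, a single endpoint can cost more than the NAT charges it avoids. Note that gateway endpoints (S3, DynamoDB) are free, so those are a win at any volume.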