Docker and Kubernetes: Complete Production Deployment Guide


Docker and Kubernetes: How They Work Together

Prerequisites: Setting Up Your Development Environment

Creating a Production-Ready Dockerfile

From Docker Run to Kubernetes: Understanding the Concepts

Deploying Your First Application to Kubernetes

Kubernetes Production Best Practices

Liveness and Readiness Probes

ConfigMaps and Secrets

Rolling Updates and Rollbacks

Horizontal Pod Autoscaling

Migrating from Docker Compose to Kubernetes

What Doesn't Translate 1:1

When to Migrate

Monitoring, Logging, and Debugging in Production

Essential kubectl Commands

Common Deployment Issues

Prometheus and Grafana

When Kubernetes Is Worth It (And When It Isn't)

I remember the moment I realized Docker Compose wasn't enough anymore. I was running a side project — a small SaaS with maybe 200 active users — on a single DigitalOcean droplet. Docker Compose handled everything: the Node.js API, PostgreSQL, Redis, an Nginx reverse proxy. One YAML file, one docker-compose up, done.

Then the database went down at 2 AM. Not a crash — the container just stopped. By the time I woke up and ran docker-compose restart, I'd lost three hours of uptime. When it happened again two weeks later during peak usage, I knew I needed something smarter. Something that could restart failed containers automatically, distribute load across multiple servers, and let me update the API without taking the whole site offline.

That's when I started learning Kubernetes. Not because it's trendy or because "everyone uses it now." I needed orchestration — a system that could manage my containers when I couldn't be there.

This guide walks you through the path I took: from a working Dockerfile to a production-ready Kubernetes cluster. You'll learn how Docker and Kubernetes work together, when the complexity is worth it, and how to migrate from Compose to K8s without breaking your application. Every command and manifest here is tested and working — the same setup I use today.

Docker and Kubernetes: How They Work Together

The first time someone told me "Kubernetes runs Docker containers," I thought it was redundant. If Docker already runs containers, why do I need Kubernetes?

Here's the distinction: Docker builds and packages containers. Kubernetes orchestrates and manages them at scale.

Think of Docker as the engine that creates a standardized shipping container for your application. It bundles your code, dependencies, and runtime into an image that runs the same way everywhere. When you run docker run, you're starting one container on one machine.

Kubernetes is the logistics system that manages hundreds of those containers across multiple machines.
It decides where containers run, monitors their health, restarts them when they fail, and handles traffic routing. You tell Kubernetes "I want three copies of this container running at all times," and it makes that happen — even if servers crash or traffic spikes.

You need both. Docker creates the container images. Kubernetes deploys and manages them in production. They're not competing tools — Kubernetes uses Docker (or other container runtimes like containerd) under the hood.

When you're running one or two containers on one server, Docker Compose is enough. When you need automatic failover, zero-downtime deployments, or horizontal scaling, that's when Kubernetes pays off.

Prerequisites: Setting Up Your Development Environment

Before deploying to Kubernetes, you need a local cluster to test against. Here's the setup I use — the path of least resistance for getting started.

Docker Desktop with Kubernetes enabled is the easiest option for Mac and Windows. It bundles everything: Docker, kubectl (the Kubernetes command-line tool), and a single-node Kubernetes cluster. For Linux users, I use k3d — a lightweight Kubernetes distribution that runs in Docker containers. Alternative options: Minikube (well-documented, heavier) or kind (popular in CI pipelines).

Creating a Production-Ready Dockerfile

The Dockerfile I use for Node.js applications in 2026 is a multi-stage build; the full listing is in the command reference at the end of this guide. A few decisions worth explaining:

Why multi-stage builds? The second stage copies only the final artifacts — no build tools, no npm cache, just the runtime. Smaller image, faster pulls.

Why node:20-alpine? Alpine Linux is a minimal base image (~5MB vs ~200MB for Debian). Node 20 is the 2026 LTS. Always pin versions — latest breaks deployments.

Why a non-root user? If an attacker compromises your application, they shouldn't have root privileges inside the container.

Layer caching: COPY package*.json comes before COPY server.js. When you change application code, only the final layer invalidates. Dependency installation stays cached. Rebuilds are fast.
The .dockerignore file keeps the build context small: exclude node_modules, npm-debug.log, .git, .env, and anything else that should never end up in the image (full listing in the command reference).

From Docker Run to Kubernetes: Understanding the Concepts

Kubernetes has a reputation for complexity, but the core concepts map directly to Docker:

Pods are the smallest deployable unit. A Pod runs one or more containers sharing networking and storage.

Deployments maintain a desired replica count. If a Pod crashes, Kubernetes starts a new one automatically.

Services give Pods a stable IP address and DNS name, load-balancing traffic across replicas.

Ingress routes external HTTP/HTTPS traffic to Services — like Nginx, but managed by Kubernetes.

Deploying Your First Application to Kubernetes

Step 1: Push your image to a registry.
Step 2: Create k8s/deployment.yaml.
Step 3: Create k8s/service.yaml.
Step 4: Deploy and verify.

You should see 3 Pods in Running status. Debugging: kubectl describe pod <pod-name> and kubectl logs <pod-name> are the first places to look. Access your app: kubectl get service demo-app-service — look for EXTERNAL-IP. On Docker Desktop it's localhost.

Kubernetes Production Best Practices

Set resource requests and limits on every container. 100m = 0.1 CPU cores; 128Mi = 128 mebibytes. If a Pod exceeds its 256Mi memory limit, Kubernetes kills it (OOMKilled). CPU limits throttle instead of kill. How to pick values: run under load, check docker stats, and start conservative.

Liveness and Readiness Probes

Add /health and /ready endpoints to your Node.js app. Without probes, Kubernetes routes traffic to Pods that haven't started yet or have crashed. I've debugged too many "why is my app 500ing" incidents that turned out to be missing probes.

Rolling Updates and Rollbacks

Update the image tag, apply, and Kubernetes replaces Pods one at a time with no downtime. Roll back when something breaks with kubectl rollout undo.

Horizontal Pod Autoscaling

When average CPU exceeds 70%, Kubernetes adds Pods. When it drops, Kubernetes removes them. HPA requires the Metrics Server — most managed services (GKE, EKS, AKS) include it by default.

Migrating from Docker Compose to Kubernetes

Use Kompose for automated conversion. Kompose generates Deployment and Service manifests from your docker-compose.yml; add resource limits, probes, and secrets manually.

What Doesn't Translate 1:1

Volumes: Docker's host-directory mounts become PersistentVolumes and PersistentVolumeClaims.

depends_on: Kubernetes doesn't guarantee startup order. Use readiness probes — your app should retry connections until dependencies are ready.
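The retry-on-startup pattern can be sketched as a small helper. This is a hypothetical example, not from the original article: connectWithRetry wraps any async connect function (for example, a Redis or PostgreSQL client's connect call) in an exponential backoff loop, and crashes after the final attempt so Kubernetes restarts the Pod:

```javascript
// Hypothetical startup helper: retry a connection with exponential backoff
// instead of relying on docker-compose's depends_on ordering.
async function connectWithRetry(connect, { retries = 5, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await connect(); // e.g. () => redisClient.connect()
    } catch (err) {
      // Out of attempts: rethrow, crash, and let Kubernetes restart the Pod.
      if (attempt === retries - 1) throw err;
      const delay = baseDelayMs * 2 ** attempt; // 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Combined with a readiness probe that reports 503 until the connection succeeds, this gives you the ordering guarantee depends_on only appeared to provide.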
Networks: In Kubernetes, Pods communicate via Service DNS names. Your app Deployment reaches Redis at redis-service:6379.

When to Migrate

Migrate to Kubernetes when the problems it solves (high availability, horizontal scaling, zero-downtime deployments, several developers deploying simultaneously) are problems you actually have. If you're on a single VPS with Docker Compose and it works, don't migrate.

Monitoring, Logging, and Debugging in Production

Common Deployment Issues

Pods stuck in Pending: Not enough resources on any Node. Check kubectl describe pod <pod-name>.

CrashLoopBackOff: Container keeps crashing. Check kubectl logs <pod-name>. Common causes: missing env vars, bad image, app crashes on startup.

Service not routing traffic: Check that the Service selector matches the Pod labels: kubectl get pods --show-labels.

Image pull errors: Check the image name and tag. Private registries need an image pull secret.

Most issues surface in kubectl describe pod events or kubectl logs. When something breaks, start there.

Prometheus and Grafana

For production monitoring, install Prometheus and Grafana via Helm, configure Prometheus as a Grafana data source, and import the "Kubernetes Cluster Monitoring" dashboard. On GKE, EKS, or AKS, use the built-in monitoring instead — it integrates automatically.

Tested environment: Node.js 20.19.2 LTS, Docker 27.1, Kubernetes 1.30 (local k3d cluster)

When Kubernetes Is Worth It (And When It Isn't)

Kubernetes is overkill for most side projects. If you're running a blog, a small SaaS, or an internal tool on one server, Docker Compose is enough. Kubernetes makes sense when you're running on multiple servers, when downtime costs you money, or when several developers deploy independently. It doesn't make sense when your app fits on one server, when you don't have time to learn Kubernetes properly, or when you're optimizing for simplicity over resilience.

I run Kubernetes for client projects where uptime matters. I run Docker Compose for my personal blog. The right tool depends on the problem.

If you've made it this far, you have everything you need to deploy a real application to Kubernetes. The YAML manifests here are production-ready — I use variations of them in production today. Start small, test locally, and only move to a cloud cluster when you're confident the pieces fit together.

The learning curve is steep. But once you've deployed a few apps, the patterns repeat. And when that 2 AM database crash happens again, Kubernetes will restart the Pod before you even wake up.
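One footnote on Horizontal Pod Autoscaling before the command reference. Per the Kubernetes documentation, the controller's core rule is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds; the real controller adds a tolerance window and readiness handling, which this sketch ignores:

```javascript
// Sketch of the HPA scaling rule from the Kubernetes docs (simplified:
// no tolerance window, no readiness gating):
// desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
function desiredReplicas(current, currentUtilization, targetUtilization, min, max) {
  const raw = Math.ceil(current * (currentUtilization / targetUtilization));
  return Math.min(max, Math.max(min, raw)); // clamp to minReplicas..maxReplicas
}

// With a 70% target and bounds 2..10: 3 Pods averaging 140% CPU
// gives ceil(3 * 140 / 70) = 6 Pods.
```

This is why a brief spike can double your replica count: the ratio of observed to target utilization drives the decision directly.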


Commands and Manifests

Verify your tools:

```shell
kubectl version --client
kubectl cluster-info
```

Create a local cluster with k3d:

```shell
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
k3d cluster create dev-cluster
kubectl get nodes
```

The Dockerfile:

```dockerfile
# Stage 1: Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Stage 2: Production stage
FROM node:20-alpine
WORKDIR /app

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Copy dependencies from builder
COPY --from=builder /app/node_modules ./node_modules
COPY server.js ./

RUN chown -R nodejs:nodejs /app
USER nodejs

EXPOSE 3000
CMD ["node", "server.js"]
```

The .dockerignore file:

```
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.DS_Store
*.md
```

Build and run locally:

```shell
docker build -t demo-app:v1 .
docker run -p 3000:3000 demo-app:v1
```

Push the image to a registry:

```shell
docker build -t your-username/demo-app:v1 .
docker login
docker push your-username/demo-app:v1
```

k8s/deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:
    app: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: your-username/demo-app:v1
          ports:
            - containerPort: 3000
          env:
            - name: PORT
              value: "3000"
            - name: NODE_ENV
              value: "production"
```

k8s/service.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app-service
spec:
  selector:
    app: demo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
```

Deploy and verify:

```shell
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl get pods
kubectl get deployment demo-app
kubectl get service demo-app-service
```

Debugging:

```shell
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs -f <pod-name>
```

Resource requests and limits (add to the container spec):

```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
```

Liveness and readiness probes (add to the container spec):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
```

Probe endpoints in the Node.js app:

```javascript
app.get('/health', (req, res) => res.json({ status: 'healthy' }));

app.get('/ready', (req, res) => {
  if (databaseConnected) {
    res.json({ status: 'ready' });
  } else {
    res.status(503).json({ status: 'not ready' });
  }
});
```

ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-app-config
data:
  PORT: "3000"
  NODE_ENV: "production"
  LOG_LEVEL: "info"
```

Reference it from the container spec:

```yaml
envFrom:
  - configMapRef:
      name: demo-app-config
```

Create a Secret:

```shell
kubectl create secret generic demo-app-secrets \
  --from-literal=DB_PASSWORD=supersecret
```

Reference it from the container spec:

```yaml
envFrom:
  - secretRef:
      name: demo-app-secrets
```

Rolling update strategy (add to the Deployment spec):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1
```

Roll back a deployment:

```shell
kubectl rollout undo deployment/demo-app
kubectl rollout history deployment/demo-app
```

Horizontal Pod Autoscaler:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Convert a Compose file with Kompose:

```shell
brew install kompose   # macOS
# Linux: download from GitHub releases
kompose convert
```

Example docker-compose.yml:

```yaml
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - NODE_ENV=production
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    restart: unless-stopped
```

Essential kubectl commands:

```shell
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs -f <pod-name>
kubectl logs -l app=demo-app
kubectl exec -it <pod-name> -- /bin/sh
kubectl port-forward pod/<pod-name> 3000:3000
```

Container runtime vs orchestration:
- Container runtime (Docker, containerd): Runs individual containers on a single machine
- Orchestration platform (Kubernetes): Manages containers across multiple machines

Enabling Kubernetes in Docker Desktop:
- Install Docker Desktop
- Open Docker Desktop → Settings → Kubernetes → Enable Kubernetes
- Wait a few minutes for the cluster to start

Migrate to Kubernetes when:
- You need high availability across multiple servers
- You're scaling horizontally
- You want zero-downtime deployments
- Multiple developers deploy simultaneously

Installing Prometheus and Grafana with Helm:
- helm install prometheus prometheus-community/prometheus
- helm install grafana grafana/grafana
- Configure Prometheus as a Grafana data source
- Import the "Kubernetes Cluster Monitoring" dashboard

Kubernetes makes sense when:
- You're running on multiple servers and need workload distribution
- Downtime costs you money — you need automatic failover and rolling updates
- You're scaling a team — multiple developers deploying independently
- You need fine-grained resource control and autoscaling

It doesn't make sense when:
- Your app fits on one server
- You don't have time to learn Kubernetes properly
- You're optimizing for simplicity over resilience
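A closing footnote on the "Service not routing traffic" issue: a Service selects Pods by plain key/value equality, so every key in the selector must appear with the same value in the Pod's labels. A toy sketch (hypothetical helper, not a Kubernetes API) makes the rule concrete:

```javascript
// Toy sketch of equality-based selector matching: every key/value pair in the
// Service's selector must be present in the Pod's labels. Extra Pod labels
// are fine; a missing or mismatched key means the Pod receives no traffic.
function selectorMatches(selector, podLabels) {
  return Object.entries(selector).every(([key, value]) => podLabels[key] === value);
}
```

This is exactly what kubectl get pods --show-labels lets you check by eye: compare the Service's spec.selector against each Pod's label set.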