Tools: Manual Kubernetes Cluster Deployment. My step-by-step story.

Prerequisites and Environment Setup

Preparing the Nodes

Set Hostnames and Hosts File

Disable Swap

Load Required Kernel Modules

Installing the Container Runtime

Installing containerd

Remove Docker if Present

Installing kubeadm, kubelet, and kubectl

Initialize the Kubernetes Control Plane

Configure Network Settings

Initialize with kubeadm

Configure kubectl

Join Worker Nodes to the Cluster

Setting Up Cluster Networking (CNI)

Installing Calico

Monitor Pod Status

(Optional) Deploy the Metrics Server

Deploy Your First Application

Example: Deploying NGINX

Cluster Management Tips

How is manual Kubernetes installation different from using managed solutions?

Can I use Docker as my container runtime?

What if I lose my Kubernetes ‘kubeadm join’ command?

My nodes stay in NotReady status after joining. Why?

When I first decided to deploy a Kubernetes cluster manually, I have to admit it felt intimidating. Even with years of tech experience, starting from zero and setting up each component myself looked like a big task. But as I went through the steps, I realized that building my cluster from the ground up gave me a deep understanding of Kubernetes. This story will show you exactly how I did it. I followed a hands-on path using kubeadm and set this up across a few cloud VMs, but the process works with any Linux servers.

Disclosure: This content was produced with AI technology support and may feature companies I have affiliations with.

I will walk you through everything I did: preparing each node, setting up the container runtime, installing the key Kubernetes tools, getting networking working, checking that everything is healthy, and launching my first app. If you want to get certified or really learn how Kubernetes ticks, I believe this step-by-step process is one of the best ways to do it.

Prerequisites and Environment Setup

Before I got started, I checked my setup. I usually spin up my environments on Google Cloud, or VirtualBox if I test locally, but honestly any provider works fine, or even a couple of Raspberry Pis if you want.

Preparing the Nodes

Manually installing Kubernetes always starts with cleaning up and tuning your operating systems. I logged into each machine with a user that had sudo rights.

Set Hostnames and Hosts File

To keep things clear, I gave each node its own hostname. Then I edited /etc/hosts on every node and added entries so each node would resolve the others' names. I ran into issues the first time I forgot to do this.

Disable Swap

Kubernetes will not even start if you have swap enabled, so I turned swap off and commented out the swap entry in /etc/fstab.

Load Required Kernel Modules

Kubernetes needs some networking support enabled in the Linux kernel: the overlay and br_netfilter modules, plus sysctl flags for bridged traffic and IP forwarding.

Installing the Container Runtime

Kubernetes does not run containers itself; it needs a container runtime. These days, containerd is a safe default and works great, so I installed it and generated its default configuration. I learned the hard way to check that config file.
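The swap step is easy to get wrong, so here is a small, hedged sketch that rehearses the same sed edit against a scratch copy of fstab instead of the real /etc/fstab (the sample entries below are made up for the demonstration):

```shell
# Rehearse the swap-disable edit on a throwaway file (sample entries are made up).
printf 'UUID=abc123 / ext4 defaults 0 1\n/dev/sda2 none swap sw 0 0\n' > fstab.sample

# Same substitution the real step applies to /etc/fstab:
# comment out any line containing " swap ".
sed -i '/ swap / s/^/#/' fstab.sample

# The swap line now starts with '#'; the root filesystem line is untouched.
cat fstab.sample
```

Once the edit looks right on a scratch file, the same sed command is safe to point at /etc/fstab with sudo.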
In /etc/containerd/config.toml, I made sure the runc options section set SystemdCgroup = true, then restarted and enabled containerd.

Remove Docker if Present

For me, Docker was pre-installed on some older VMs. Since Kubernetes now recommends containerd or CRI-O, I removed Docker where it was present to avoid conflicts.

Installing kubeadm, kubelet, and kubectl

These three tools are the main pieces you need on each node to run and manage Kubernetes itself. I installed all three everywhere, pinned to the same version and held back from automatic upgrades.

Initialize the Kubernetes Control Plane

The most crucial part happens on my k8s-master node. This is where the brains of the cluster come online. I re-applied the sysctl settings (sudo sysctl --system) one more time on the control plane before starting.

Initialize with kubeadm

The pod network CIDR is important: it must match the CNI plugin I would use later. For Calico, I used 192.168.0.0/16. The output of kubeadm init showed me lots of details. It checked my system, created TLS certificates, grabbed container images, and started the control plane. At the end, it gave me two things: the exact steps to set up kubectl for my own user, and a kubeadm join command for bringing worker nodes into the cluster.

Configure kubectl

I set up kubectl as my normal (non-root) user so I did not have to log in as root every time.

Join Worker Nodes to the Cluster

On each worker machine, I pasted the full kubeadm join line that I got earlier. Each node ran checks and then securely joined the cluster. If it failed, I found that double-checking firewalls or the join token did the trick.

Setting Up Cluster Networking (CNI)

After the workers joined, they showed up as "NotReady". It turns out Kubernetes needs a network plugin before pods can talk to each other. I like Calico: it is solid and has good docs, so I applied its manifest on the control plane. Sometimes I needed a different network CIDR; then I downloaded the YAML file, edited the CALICO_IPV4POOL_CIDR line, and applied it.

Monitor Pod Status

I always kept an eye on the system pods to see when they were up and running. When the Calico pods and coredns were running, my nodes finally showed "Ready". This moment always feels good.

(Optional) Deploy the Metrics Server

To see pod and node CPU or memory usage with kubectl top, I needed the metrics server. Setting it up was fast. After a minute, I checked the pod status, ran kubectl top, and suddenly I could see real-time stats for everything.

Deploy Your First Application

Once my cluster was running, I wanted to launch something real. I started with NGINX.
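One detail worth knowing about the join command: the sha256 value after --discovery-token-ca-cert-hash can be recomputed from the cluster CA certificate (normally /etc/kubernetes/pki/ca.crt on the control plane). A hedged sketch, using a throwaway self-signed certificate so it can run without a real cluster:

```shell
# Generate a throwaway CA cert so the pipeline can be demonstrated anywhere;
# on a real control plane you would point at /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# Hash the DER-encoded public key of the CA certificate; this is the value
# that goes after --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //')
echo "sha256:${hash}" | tee join-hash.txt
```

This is handy when you still have the token but lost the hash half of the join command.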
Example: Deploying NGINX

I created a deployment and exposed it using a NodePort. I looked up the assigned port with kubectl get svc, then opened my browser and visited <WorkerNodeIP>:<NodePort>. When the NGINX welcome page appeared, I knew everything had worked.

Cluster Management Tips

At this point, I often find that keeping track of different cloud provider tools and learning paths can get overwhelming, especially when scaling up beyond personal projects. This is where platforms like Canvas Cloud AI can make a difference. It lets you experiment visually with cloud architectures, including Kubernetes, and offers guided hands-on scenarios, tailored templates, and service comparisons across AWS, Azure, GCP, and OCI. The free learning paths, cheat sheets, and embeddable resources are especially helpful both for new learners and for those expanding their cloud skills portfolio.

How is manual Kubernetes installation different from using managed solutions?

In my experience, building Kubernetes by hand teaches you every layer of the system. With managed services (like GKE or EKS), or with tools like minikube, all the hard stuff is already done for you. But when I face a real cluster issue, I am always glad I did a manual install at least once. It really helps with troubleshooting and understanding network plugins, and it gave me the confidence to pursue Kubernetes certifications.

Can I use Docker as my container runtime?

I used Docker for years, but Kubernetes stopped supporting it directly in version 1.24. Now I use containerd for every new cluster. If you really need Docker, you must set up an extra compatibility layer, the Docker shim (cri-dockerd), but I recommend switching to containerd or CRI-O. It works better and keeps things simple.

What if I lose my Kubernetes 'kubeadm join' command?

This happened to me when I closed my terminal too early. Not a problem: on the control plane, kubeadm token create --print-join-command gives a fresh join command with a new token for the next worker node.

My nodes stay in NotReady status after joining. Why?

Every time my nodes were stuck as "NotReady", it was almost always because networking was not configured yet. I double-checked that I had installed the CNI plugin (like Calico) from the control plane node.
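The imperative kubectl create/expose pair used for NGINX is quick, but the same setup can also be kept as a manifest and applied declaratively. A hedged sketch; the file name, labels, and replica count are my own illustrative choices, not part of the original steps:

```shell
# Write an equivalent declarative manifest (names and labels are illustrative).
cat > nginx.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF

# On a live cluster this would be applied with:
#   kubectl apply -f nginx.yaml
```

Keeping the manifest in version control makes the deployment reproducible, which the one-off create/expose commands are not.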
I also checked firewalls to make sure pod and node ports were open.

By going through this manual Kubernetes cluster deployment, I gained real knowledge of Kubernetes' inner workings. It made troubleshooting less scary, and I now feel prepared to handle more advanced or production setups. I really recommend doing it at least once for anyone working with Kubernetes.
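For those firewall checks, these are the ports a stock kubeadm cluster needs reachable; the nc probe at the end is a hedged sketch with a placeholder hostname:

```shell
# Ports a stock kubeadm cluster relies on (control plane + workers).
cat <<'EOF' | tee k8s-ports.txt
6443        Kubernetes API server (control plane)
2379-2380   etcd client/peer (control plane)
10250       kubelet API (all nodes)
10257       kube-controller-manager (control plane)
10259       kube-scheduler (control plane)
30000-32767 NodePort services (all nodes)
EOF

# Example probe from a worker node (k8s-master is a placeholder hostname):
#   nc -zv k8s-master 6443
```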

Commands

Set hostnames (one per node):

$ sudo hostnamectl set-hostname k8s-master    # on the control plane
$ sudo hostnamectl set-hostname k8s-worker1   # on the first worker (repeat per worker)

Add hosts entries on every node (/etc/hosts):

192.168.1.10 k8s-master
192.168.1.11 k8s-worker1
192.168.1.12 k8s-worker2

Disable swap:

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab

Load kernel modules and set sysctl flags:

$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
$ sudo sysctl --system

Install containerd and generate its default config:

$ sudo apt-get update
$ sudo apt-get install -y containerd
$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml

Required section in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

Restart and enable containerd:

$ sudo systemctl restart containerd
$ sudo systemctl enable containerd

Install kubeadm, kubelet, and kubectl (same version everywhere, then hold):

$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet=1.33.2-1.1 kubeadm=1.33.2-1.1 kubectl=1.33.2-1.1
$ sudo apt-mark hold kubelet kubeadm kubectl

Re-apply sysctl settings on the control plane:

$ sudo sysctl --system

Initialize the control plane:

$ sudo kubeadm init \
    --pod-network-cidr=192.168.0.0/16 \
    --cri-socket=unix:///run/containerd/containerd.sock

kubeadm init hands back two things:

- Exact steps to set up kubectl for my own user
- A command that looks like kubeadm join ... for bringing worker nodes into the cluster

Configure kubectl for the non-root user:

$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join each worker:

$ sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>

Install Calico:

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Monitor pods and nodes:

$ kubectl get pods -n kube-system
$ kubectl get nodes

Deploy the metrics server and check usage:

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ kubectl top nodes
$ kubectl top pods

Deploy and expose NGINX, then find the NodePort:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get svc nginx

Regenerate a lost join command (on the control plane):

$ kubeadm token create --print-join-command

Prerequisites checklist:

- At least two servers: one control plane node (some folks call this the master) and at least one worker. To be practical, I used two VMs.
- Ubuntu 22.04, with at least 2 vCPU and 2 GB RAM for the master and 1 vCPU, 2 GB RAM for the worker.
- A static private IP for each machine, with the cluster's pod network range checked so it does not overlap with these IPs.
- All nodes able to "see" each other over the network across all needed Kubernetes ports.
- Firewalls and cloud security rules that do not block Kubernetes traffic (like port 6443 for the API and node ports 30000-32767).

Cluster management tips:

- I make sure kubeadm, kubelet, and kubectl match in version on all nodes.
- I protect my environments by using secure SSH keys, updating firewalls, and locking down the admin.conf file.
- For backup, I regularly save the /etc/kubernetes directory, mainly admin.conf and the PKI folder.
- I always watch the health of my cluster using kubectl get pods -n kube-system and kubectl get nodes.
- I read the Kubernetes docs before doing upgrades or changes to make sure I avoid problems.
- If I want to manage the cluster from my laptop, I copy admin.conf to ~/.kube/config there.
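The backup tip above can be sketched as a tiny script. Paths assume a default kubeadm layout; the demo below works on a stand-in directory so it can run anywhere without touching a real control plane:

```shell
# Back up the kubeadm state (admin.conf + PKI). On a real control plane,
# SRC would be /etc/kubernetes; a stand-in tree is created here for the demo.
SRC=./kubernetes-demo
mkdir -p "$SRC/pki"
touch "$SRC/admin.conf" "$SRC/pki/ca.crt" "$SRC/pki/ca.key"

# Timestamped archive of the whole directory.
STAMP=$(date +%Y%m%d-%H%M%S)
tar -czf "k8s-backup-$STAMP.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# List the archive contents to confirm admin.conf and the PKI folder made it in.
tar -tzf "k8s-backup-$STAMP.tar.gz"
```

On a real node this would run with sudo and the archive would be shipped somewhere off the control plane.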