
Build a Kubernetes Cluster with kubeadm from Scratch (2026)

Step-by-step guide to building a real multi-node Kubernetes cluster using kubeadm — no managed services, no shortcuts.

DevOpsBoys · Apr 3, 2026 · 3 min read

Managed Kubernetes (EKS, GKE, AKS) is great for production. But if you want to truly understand how Kubernetes works — every component, every config file — you need to build it yourself at least once.

This is that guide. Two VMs, one real cluster.


What You'll Build

  • 1 control plane node (master)
  • 1 worker node
  • Kubernetes 1.32
  • Calico for networking (CNI)
  • Working kubectl from your local machine

Prerequisites

Two Ubuntu 22.04 VMs (local with VirtualBox, cloud VMs, or VPS):

  • Control plane: minimum 2 CPUs, 2GB RAM
  • Worker: minimum 1 CPU, 1GB RAM
  • Both on the same network
  • SSH access to both
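
If your VMs can't resolve each other by name, it helps to set hostnames and /etc/hosts entries on both nodes before starting. The hostnames and IPs below are placeholders for illustration — substitute your own. Note that the example IPs deliberately avoid 192.168.0.0/16, which this guide later uses as the pod network CIDR:

```bash
# On the control plane:
hostnamectl set-hostname k8s-control

# On the worker:
hostnamectl set-hostname k8s-worker

# On BOTH nodes, map hostnames to your actual node IPs:
cat <<EOF >> /etc/hosts
10.0.0.10 k8s-control
10.0.0.11 k8s-worker
EOF
```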

Step 1: Prepare Both Nodes

Run this on both control plane and worker:

bash
# Disable swap (the kubelet refuses to run with swap enabled by default)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
 
# Load required kernel modules
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
 
modprobe overlay
modprobe br_netfilter
 
# Set sysctl params
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
 
sysctl --system
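
An optional sanity check that the modules and sysctl settings actually took effect:

```bash
# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'

# Each should print "... = 1"
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
```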

Step 2: Install containerd (Both Nodes)

Kubernetes needs a CRI-compatible container runtime; we'll use containerd:

bash
apt-get update
apt-get install -y ca-certificates curl gnupg
 
# Add Docker's GPG key (containerd is in Docker's repo)
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  gpg --dearmor -o /etc/apt/keyrings/docker.gpg
 
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  tee /etc/apt/sources.list.d/docker.list
 
apt-get update
apt-get install -y containerd.io
 
# Configure containerd to use systemd cgroup driver
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
 
systemctl restart containerd
systemctl enable containerd

Step 3: Install kubeadm, kubelet, kubectl (Both Nodes)

bash
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg
 
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | \
  gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
 
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \
  https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | \
  tee /etc/apt/sources.list.d/kubernetes.list
 
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
 
systemctl enable kubelet

Step 4: Initialize the Control Plane

Run this only on the control plane node:

bash
kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=<CONTROL_PLANE_IP>

Wait for it to complete. You'll see:

Your Kubernetes control-plane has initialized successfully!

Set up kubectl:

bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Copy the kubeadm join command from the output — you'll need it for the worker.
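
If you lose the join command (the bootstrap token expires after 24 hours by default), you can generate a fresh one on the control plane:

```bash
kubeadm token create --print-join-command
```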


Step 5: Install Calico CNI (Control Plane Only)

Without a CNI plugin, pods can't communicate and nodes will stay in NotReady state:

bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
 
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml

Wait for Calico to be ready:

bash
watch kubectl get pods -n calico-system
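
If you'd rather block until the pods are up instead of watching, `kubectl wait` works too (the timeout value here is arbitrary):

```bash
kubectl wait --namespace calico-system \
  --for=condition=Ready pods --all \
  --timeout=300s
```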

Step 6: Join the Worker Node

Run the join command on the worker node (from kubeadm init output):

bash
kubeadm join <CONTROL_PLANE_IP>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

Back on the control plane, verify:

bash
kubectl get nodes

You should see both nodes in Ready state within 2 minutes.
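
If a node stays NotReady, these are the usual places to look. The first two run on the control plane; the journalctl command runs on the affected node (`<NODE_NAME>` is a placeholder):

```bash
# Check the node's reported conditions
kubectl describe node <NODE_NAME> | grep -A5 Conditions

# All Calico pods should be Running
kubectl get pods -n calico-system -o wide

# On the affected node: recent kubelet logs
journalctl -u kubelet --no-pager | tail -50
```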


Step 7: Test Your Cluster

Deploy a test workload:

bash
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx

Open http://<WORKER_IP>:<NodePort> in a browser; Nginx's default welcome page should load.
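
You can also fetch the NodePort programmatically and test from the command line (the jsonpath below assumes the single-port service created above):

```bash
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -s http://<WORKER_IP>:${NODE_PORT} | grep -i '<title>'
# Expect: <title>Welcome to nginx!</title>
```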


Access from Your Local Machine

Copy the kubeconfig from control plane to your laptop:

bash
scp user@<CONTROL_PLANE_IP>:~/.kube/config ~/.kube/config-kubeadm
export KUBECONFIG=~/.kube/config-kubeadm
kubectl get nodes
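
One gotcha: the copied kubeconfig points at the `--apiserver-advertise-address` you passed to kubeadm init. If that's a private IP your laptop can't reach, check the configured endpoint and edit the `server:` field in the file to a reachable address:

```bash
kubectl config view --kubeconfig ~/.kube/config-kubeadm \
  -o jsonpath='{.clusters[0].cluster.server}'
# e.g. https://<CONTROL_PLANE_IP>:6443
```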

What You've Learned

By building this cluster you now understand:

  • Why swap needs to be disabled (kubelet requirement)
  • How containerd integrates with Kubernetes
  • What kubeadm init actually does (certificates, etcd, API server, scheduler, controller manager)
  • How CNI plugins enable pod networking
  • How worker nodes join the cluster
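
The control-plane components that kubeadm init created run as static pods, so you can inspect them directly:

```bash
kubectl get pods -n kube-system
# Expect etcd, kube-apiserver, kube-controller-manager, and kube-scheduler
# (each suffixed with the control-plane hostname), plus coredns and kube-proxy
```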

Every "magic" thing EKS does for you — you just did manually.

