What is a Kubernetes Node? Explained Simply for Beginners (2026)
Kubernetes nodes are the machines where your containers actually run. Here's what a node is, the difference between worker nodes and control plane nodes, what runs on them, and how to manage node issues.
In Kubernetes, a node is a machine — physical or virtual — that runs your containers. If Kubernetes is the operating system for your cluster, nodes are the hardware it runs on.
Here's everything you need to know about nodes.
The Two Types of Nodes
Kubernetes clusters have two types of nodes:
Control Plane Nodes (Masters)
These run the Kubernetes control plane — the "brain" of the cluster.
Control Plane Node runs:
├── API Server ← All kubectl commands go here
├── etcd ← Stores all cluster state
├── Scheduler ← Decides which node runs each pod
└── Controller Manager ← Reconciles desired vs actual state
In production setups, control plane nodes don't run your application pods; they run only Kubernetes system components.
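On kubeadm-built clusters these components run as static pods in the kube-system namespace, so you can list them with kubectl (recent kubeadm versions label them tier=control-plane; managed services like EKS and GKE hide the control plane entirely, so you won't see them there):
# Control plane components on a kubeadm cluster — pod names include the node name
kubectl get pods -n kube-system -l tier=control-plane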
Worker Nodes
These are where your application containers actually run.
Worker Node runs:
├── kubelet ← Agent that manages pods on this node
├── kube-proxy ← Handles networking rules
├── Container Runtime ← containerd or CRI-O (runs containers)
└── Your pods ← nginx, postgres, myapp, etc.
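On the node itself, the kubelet typically runs as a systemd service and the container runtime has its own CLI. A quick way to see both, assuming a systemd-based node with containerd and crictl installed:
# Run these on the worker node itself, not through kubectl
systemctl status kubelet
crictl ps    # containers the runtime is currently managing on this node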
Most clusters have 1–3 control plane nodes and 3–100+ worker nodes depending on scale.
What Lives on a Node
When a pod gets scheduled to a node, here's what actually happens:
Scheduler assigns pod to Node 3
│
▼
API Server updates etcd: "pod X → node 3"
│
▼
kubelet on Node 3 sees the update
│
▼
kubelet tells containerd: "start this container"
│
▼
containerd pulls image and starts container
│
▼
kubelet reports back: "pod is Running"
The kubelet is the critical component on every worker node — it's the agent that makes it all happen.
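You can watch this sequence in a pod's events (exact wording varies by Kubernetes version, but you'll see Scheduled, Pulling, Pulled, Created, and Started entries). Using a hypothetical pod named mypod:
kubectl describe pod mypod | grep -A 10 Events
# or stream cluster events as they happen
kubectl get events --watch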
Viewing Nodes
# List all nodes
kubectl get nodes
# Output:
# NAME STATUS ROLES AGE VERSION
# node1 Ready <none> 10d v1.30.0
# node2 Ready <none> 10d v1.30.0
# node3 Ready <none> 10d v1.30.0
# controlplane Ready control-plane 10d v1.30.0
# Detailed node info
kubectl describe node node1
# Node resource usage
kubectl top nodes
# NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
# node1 245m 6% 2048Mi 54%
# node2 180m 4% 1800Mi 48%
Node Conditions
Each node reports its health through conditions:
kubectl get nodes -o wide
# Check conditions
kubectl describe node node1 | grep -A 20 Conditions
| Condition | Status | Meaning |
|---|---|---|
| Ready | True | Node is healthy, can accept pods |
| Ready | False | Node has issues, won't accept pods |
| MemoryPressure | True | Node is running low on memory |
| DiskPressure | True | Node disk is nearly full |
| PIDPressure | True | Too many processes on the node |
| NetworkUnavailable | True | Network not configured correctly |
When Ready = False, pods on that node will be evicted and rescheduled elsewhere (after a configurable timeout — default 5 minutes).
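That 5-minute default comes from tolerations Kubernetes injects into every pod for the not-ready and unreachable node conditions. If a workload should fail over faster, you can shorten the window per pod with tolerationSeconds; a minimal sketch:
# Evict this pod after 60s (instead of the default 300s) if its node stops being Ready
spec:
  tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60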
Node Resources: CPU and Memory
Every node has a certain amount of CPU and memory. Kubernetes tracks:
- Capacity — Total resources on the node
- Allocatable — Resources available for pods (capacity minus system reserved)
kubectl describe node node1 | grep -A 10 "Capacity\|Allocatable"
# Capacity:
# cpu: 4
# memory: 8024932Ki
# pods: 110
# Allocatable:
# cpu: 3920m ← ~80m reserved for system
# memory: 7671268Ki ← some reserved for kubelet/OS
# pods: 110
When a pod has resource requests defined:
resources:
  requests:
    cpu: "500m"
    memory: "256Mi"
The scheduler only places the pod on a node with at least 500m CPU and 256Mi of memory still unrequested (allocatable minus the requests of pods already running there).
# See what's currently allocated on a node
kubectl describe node node1 | grep -A 30 "Allocated resources"
Taints and Tolerations — Controlling Pod Placement
Taints mark a node as special — pods won't be scheduled there unless they explicitly tolerate the taint.
# Add a taint to a node
kubectl taint nodes node3 gpu=true:NoSchedule
# Now only pods with this toleration can run on node3:
# Pod spec — tolerates the GPU taint
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
Common taint patterns:
| Taint | Use case |
|---|---|
| node-role.kubernetes.io/control-plane:NoSchedule | Prevents app pods on control plane |
| nvidia.com/gpu:NoSchedule | GPU-only pods run on GPU nodes |
| node.kubernetes.io/not-ready:NoExecute | Evicts pods when node goes NotReady |
| dedicated=team-a:NoSchedule | Dedicated nodes for specific team |
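To remove a taint, repeat the same key and effect with a trailing minus sign:
# Remove the gpu taint added earlier
kubectl taint nodes node3 gpu=true:NoSchedule-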
Node Selectors and Affinity
Node selector — simple: run this pod on nodes with this label.
# Label a node
kubectl label node node3 disktype=ssd
# Pod scheduled only on SSD nodes
spec:
  nodeSelector:
    disktype: ssd
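Before relying on a selector, it's worth confirming which nodes actually carry the label:
kubectl get nodes -l disktype=ssd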
Node affinity — more expressive:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["ap-south-1a", "ap-south-1b"]
Draining a Node
Draining removes all pods from a node gracefully — used before maintenance, upgrades, or decommissioning:
# Cordon the node first (stop new pods from scheduling here)
kubectl cordon node2
# Drain all pods off the node
kubectl drain node2 \
--ignore-daemonsets \
--delete-emptydir-data \
--force
# Now the node is empty — do your maintenance
# systemctl restart kubelet or apt upgrade
# Uncordon to allow pods to schedule again
kubectl uncordon node2
--ignore-daemonsets skips DaemonSet pods (kube-proxy, node-exporter, etc.) since they're on every node by design.
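If a drain hangs, it's usually because evicting a pod would violate a PodDisruptionBudget; you can check whether one applies:
# Drains respect PodDisruptionBudgets; list them across all namespaces
kubectl get pdb -A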
Node Autoscaling
In cloud environments, you usually don't manage nodes by hand. The Cluster Autoscaler or Karpenter adds and removes nodes automatically:
Too many Pending pods (not enough resources)
│
▼
Autoscaler detects: "I need more nodes"
│
▼
Adds 2 new nodes to the cloud provider
│
▼
Pending pods get scheduled on new nodes
Later: nodes are idle for 10+ minutes
│
▼
Autoscaler drains and removes idle nodes
With autoscaling, your cluster grows and shrinks with your workload automatically.
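The trigger condition is visible in the cluster itself: pods stuck in Pending with a FailedScheduling event. You can check for them with:
# Pods the autoscaler would react to
kubectl get pods -A --field-selector=status.phase=Pending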
Troubleshooting Node Issues
Node shows NotReady:
kubectl describe node node2 | grep -A 5 Conditions
# Look for: MemoryPressure, DiskPressure, NetworkUnavailable
# SSH to the node and check kubelet
systemctl status kubelet
journalctl -u kubelet -n 100
# Check disk usage
df -h
# If /var/lib/containerd is full → clean up old images
crictl rmi --prune
Node unreachable (SSH needed):
# Check from another node or bastion
ping node2-ip
ssh ubuntu@node2-ip
Pod stuck on specific node:
# Find which node it's on
kubectl get pod mypod -o wide
# See node's available resources
kubectl describe node <node-name> | grep -A 10 "Allocated resources"
# Check if node is cordoned
kubectl get nodes | grep SchedulingDisabled
Nodes are the foundation everything else in Kubernetes builds on. Once you understand nodes — their resources, conditions, taints, and lifecycle — scheduling problems, resource errors, and cluster scaling make much more sense.
For deeper Kubernetes learning, the CKA exam preparation guide covers nodes, networking, and storage in depth. The official Kubernetes documentation is also an excellent reference.