
What is a Kubernetes Node? Explained Simply for Beginners (2026)

Kubernetes nodes are the machines where your containers actually run. Here's what a node is, the difference between worker nodes and control plane nodes, what runs on them, and how to manage node issues.

DevOpsBoys · Apr 28, 2026 · 5 min read

In Kubernetes, a node is a machine — physical or virtual — that runs your containers. If Kubernetes is the operating system for your cluster, nodes are the hardware it runs on.

Here's everything you need to know about nodes.


The Two Types of Nodes

Kubernetes clusters have two types of nodes:

Control Plane Nodes (Masters)

These run the Kubernetes control plane — the "brain" of the cluster.

Control Plane Node runs:
├── API Server      ← All kubectl commands go here
├── etcd            ← Stores all cluster state
├── Scheduler       ← Decides which node runs each pod
└── Controller Manager ← Reconciles desired vs actual state

Control plane nodes don't run your application pods (in production setups). They only run Kubernetes system components.

Worker Nodes

These are where your application containers actually run.

Worker Node runs:
├── kubelet         ← Agent that manages pods on this node
├── kube-proxy      ← Handles networking rules
├── Container Runtime  ← containerd or CRI-O (runs containers)
└── Your pods       ← nginx, postgres, myapp, etc.

Most clusters have 1–3 control plane nodes and 3–100+ worker nodes depending on scale.


What Lives on a Node

When a pod gets scheduled to a node, here's what actually happens:

Scheduler assigns pod to Node 3
    │
    ▼
API Server updates etcd: "pod X → node 3"
    │
    ▼
kubelet on Node 3 sees the update
    │
    ▼
kubelet tells containerd: "start this container"
    │
    ▼
containerd pulls image and starts container
    │
    ▼
kubelet reports back: "pod is Running"

The kubelet is the critical component on every worker node — it's the agent that makes it all happen.
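You can watch that sequence from the outside by creating a throwaway pod and reading its event trail. A dry-run sketch: the pod name `flow-demo` is made up, and `KUBECTL` defaults to `echo kubectl` so the snippet prints the commands instead of needing a cluster:

```shell
# Dry-run by default; set KUBECTL=kubectl to run against a real cluster
KUBECTL="${KUBECTL:-echo kubectl}"

# 1. Create a throwaway pod (hypothetical name)
$KUBECTL run flow-demo --image=nginx

# 2. Read its event trail; expect Scheduled, Pulling, Pulled, Created, Started
$KUBECTL get events --field-selector involvedObject.name=flow-demo

# 3. Clean up
$KUBECTL delete pod flow-demo
```

The events map one-to-one onto the diagram above: Scheduled comes from the scheduler, Pulling/Pulled/Created/Started from the kubelet on the assigned node.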


Viewing Nodes

bash
# List all nodes
kubectl get nodes
 
# Output:
# NAME           STATUS   ROLES           AGE   VERSION
# node1          Ready    <none>          10d   v1.30.0
# node2          Ready    <none>          10d   v1.30.0
# node3          Ready    <none>          10d   v1.30.0
# controlplane   Ready    control-plane   10d   v1.30.0
 
# Detailed node info
kubectl describe node node1
 
# Node resource usage
kubectl top nodes
# NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
# node1   245m         6%     2048Mi          54%
# node2   180m         4%     1800Mi          48%

Node Conditions

Each node reports its health through conditions:

bash
kubectl get nodes -o wide
 
# Check conditions
kubectl describe node node1 | grep -A 20 Conditions
Condition            Status   Meaning
Ready                True     Node is healthy, can accept pods
Ready                False    Node has issues, won't accept pods
MemoryPressure       True     Node is running low on memory
DiskPressure         True     Node disk is nearly full
PIDPressure          True     Too many processes on the node
NetworkUnavailable   True     Network not configured correctly

When Ready goes False (or Unknown, meaning the node has stopped reporting), pods on that node are evicted and rescheduled elsewhere after a configurable timeout; the default is 5 minutes.
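That 5-minute window isn't hardcoded into the kubelet: Kubernetes adds default `not-ready`/`unreachable` tolerations (with `tolerationSeconds: 300`) to every pod, and you can override them per pod. A sketch that shortens the eviction delay to one minute:

```yaml
spec:
  tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60   # evict after 60s instead of the default 300s
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
```

Shorter values mean faster failover but more churn during brief network blips; pick the tradeoff per workload.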


Node Resources: CPU and Memory

Every node has a certain amount of CPU and memory. Kubernetes tracks:

  • Capacity — Total resources on the node
  • Allocatable — Resources available for pods (capacity minus system reserved)
bash
kubectl describe node node1 | grep -A 10 "Capacity\|Allocatable"
 
# Capacity:
#   cpu:                4
#   memory:             8024932Ki
#   pods:               110
# Allocatable:
#   cpu:                3920m       ← ~80m reserved for system
#   memory:             7671268Ki   ← some reserved for kubelet/OS
#   pods:               110

When a pod has resource requests defined:

yaml
resources:
  requests:
    cpu: "500m"
    memory: "256Mi"

The scheduler will only place the pod on a node with at least 500m CPU and 256Mi of memory still unclaimed, that is, allocatable capacity minus the requests of pods already scheduled there.

bash
# See what's currently allocated on a node
kubectl describe node node1 | grep -A 30 "Allocated resources"
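Putting Capacity, Allocatable, and Allocated together, the scheduler's fit check is plain arithmetic. A toy sketch with made-up numbers (a real scheduler reads these from the API server, not from shell variables):

```shell
# Made-up numbers in millicores, mirroring `kubectl describe node` output
allocatable_cpu=3920   # Allocatable on the node
allocated_cpu=3600     # sum of requests from pods already scheduled there
request=500            # the new pod's CPU request

available=$((allocatable_cpu - allocated_cpu))
if [ "$request" -le "$available" ]; then
  echo "fits: ${available}m available"
else
  echo "does not fit: only ${available}m available, need ${request}m"
fi
# → does not fit: only 320m available, need 500m
```

Note the check uses requests, not actual usage: a node full of idle-but-greedy pods can reject new work even at low real CPU utilization.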

Taints and Tolerations — Controlling Pod Placement

Taints mark a node as special — pods won't be scheduled there unless they explicitly tolerate the taint.

bash
# Add a taint to a node
kubectl taint nodes node3 gpu=true:NoSchedule
 
# Now only pods with this toleration can run on node3:
yaml
# Pod spec — tolerates the GPU taint
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

Common taint patterns:

Taint                                              Use case
node-role.kubernetes.io/control-plane:NoSchedule   Prevents app pods on control plane
nvidia.com/gpu:NoSchedule                          GPU-only pods run on GPU nodes
node.kubernetes.io/not-ready:NoExecute             Evicts pods when node goes NotReady
dedicated=team-a:NoSchedule                        Dedicated nodes for specific team
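Note that a toleration only lets a pod onto a tainted node; it doesn't force the pod there. To truly dedicate nodes, combine the taint with a label and a selector. A hypothetical GPU pod (the `gpu=true` label, pod name, and image are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  nodeSelector:
    gpu: "true"          # assumes the tainted node is also labeled gpu=true
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: trainer
    image: myorg/trainer:latest   # placeholder image
```

The taint keeps everyone else off the node; the nodeSelector keeps this pod on it.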

Node Selectors and Affinity

Node selector — simple: run this pod on nodes with this label.

bash
# Label a node
kubectl label node node3 disktype=ssd
 
# Pod scheduled only on SSD nodes
yaml
spec:
  nodeSelector:
    disktype: ssd

Node affinity — more expressive:

yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["ap-south-1a", "ap-south-1b"]
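`requiredDuringSchedulingIgnoredDuringExecution` is a hard rule: pods stay Pending if no node matches. For a soft preference, use the `preferred` form, which scores matching nodes higher instead of filtering the rest out:

```yaml
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80              # 1-100; added to the node's score if it matches
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
```

With this spec, the pod lands on an SSD node when one has room, but still schedules elsewhere rather than staying Pending.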

Draining a Node

Draining removes all pods from a node gracefully — used before maintenance, upgrades, or decommissioning:

bash
# Cordon the node first (stop new pods from scheduling here)
kubectl cordon node2
 
# Drain all pods off the node
kubectl drain node2 \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --force
 
# Now the node is empty — do your maintenance
# systemctl restart kubelet  or  apt upgrade
 
# Uncordon to allow pods to schedule again
kubectl uncordon node2

--ignore-daemonsets skips DaemonSet pods (kube-proxy, node-exporter, etc.) since they're on every node by design.
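For clusters with more than a handful of workers, the cordon/drain/uncordon cycle is easy to script. A sketch of a rolling maintenance loop; the node names are made up, the maintenance step is a placeholder, and `KUBECTL` defaults to `echo kubectl` so the snippet dry-runs without a cluster:

```shell
# Dry-run by default: prints commands instead of executing them.
# On a real cluster: export KUBECTL=kubectl
KUBECTL="${KUBECTL:-echo kubectl}"

for node in node1 node2 node3; do
  $KUBECTL cordon "$node"
  $KUBECTL drain "$node" --ignore-daemonsets --delete-emptydir-data
  # ...maintenance goes here: reboot, package upgrade, kubelet restart...
  $KUBECTL uncordon "$node"
done
```

Draining one node at a time keeps enough spare capacity for evicted pods to reschedule; pair it with PodDisruptionBudgets if your apps need a minimum replica count during the rollout.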


Node Autoscaling

In cloud environments, you rarely manage nodes by hand. Tools like the Cluster Autoscaler or Karpenter add and remove nodes automatically:

Too many Pending pods (not enough resources)
    │
    ▼
Autoscaler detects: "I need more nodes"
    │
    ▼
Adds 2 new nodes to the cloud provider
    │
    ▼
Pending pods get scheduled on new nodes

Later: nodes are idle for 10+ minutes
    │
    ▼
Autoscaler drains and removes idle nodes

With autoscaling, your cluster grows and shrinks with your workload automatically.
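The trigger condition is essentially "are there unschedulable Pending pods?". A toy version of that check, run against sample `kubectl get pods -A` output so it works without a cluster (a real autoscaler watches pods through the API, not text output):

```shell
# Count Pending pods in the sample output (the header line doesn't match)
pending=$(awk '$4 == "Pending" { n++ } END { print n+0 }' <<'EOF'
NAMESPACE   NAME    READY   STATUS    RESTARTS   AGE
default     web-1   0/1     Pending   0          2m
default     web-2   0/1     Pending   0          2m
default     web-3   1/1     Running   0          10d
EOF
)

if [ "$pending" -gt 0 ]; then
  echo "scale up: $pending pending pods"
fi
# → scale up: 2 pending pods
```

On a live cluster, the equivalent quick check is `kubectl get pods -A --field-selector=status.phase=Pending`.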


Troubleshooting Node Issues

Node shows NotReady:

bash
kubectl describe node node2 | grep -A 5 Conditions
# Look for: MemoryPressure, DiskPressure, NetworkUnavailable
 
# SSH to the node and check kubelet
systemctl status kubelet
journalctl -u kubelet -n 100
 
# Check disk usage
df -h
# If /var/lib/containerd is full → clean up old images
crictl rmi --prune
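To spot unhealthy nodes across the whole cluster at a glance, filter the STATUS column of `kubectl get nodes --no-headers`. Shown here against sample output so it runs anywhere; on a real cluster, pipe `kubectl get nodes --no-headers` into the same awk:

```shell
# Print any node whose STATUS column isn't exactly "Ready"
unhealthy=$(awk '$2 != "Ready" { print $1, $2 }' <<'EOF'
node1          Ready      <none>          10d   v1.30.0
node2          NotReady   <none>          10d   v1.30.0
controlplane   Ready      control-plane   10d   v1.30.0
EOF
)
echo "$unhealthy"
# → node2 NotReady
```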

Node unreachable (SSH needed):

bash
# Check from another node or bastion
ping node2-ip
ssh ubuntu@node2-ip

Pod stuck on specific node:

bash
# Find which node it's on
kubectl get pod mypod -o wide
 
# See node's available resources
kubectl describe node <node-name> | grep -A 10 "Allocated resources"
 
# Check if node is cordoned
kubectl get nodes | grep SchedulingDisabled

Nodes are the foundation everything else in Kubernetes builds on. Once you understand nodes — their resources, conditions, taints, and lifecycle — scheduling problems, resource errors, and cluster scaling make much more sense.

For deeper Kubernetes learning, the CKA exam preparation guide covers nodes, networking, and storage in depth. The Kubernetes Documentation is also excellent for official reference.
