
Kubernetes Evicted Pods — Why It Happens and How to Fix It (2026)

Pods suddenly showing 'Evicted' status in Kubernetes? Here's every reason nodes evict pods and exactly how to prevent it from happening again.

DevOpsBoys · Apr 4, 2026 · 3 min read

You check your cluster and see pods with Evicted status. They're not running, they're not pending — they've been forcibly removed by Kubernetes. Here's what's happening and how to stop it.


What Eviction Means

The kubelet (the agent that runs on each node) monitors node resources. When a node runs low on memory, disk, inodes, or process IDs, the kubelet starts evicting pods to reclaim those resources.

This is Kubernetes protecting the node from complete failure. An evicted pod never restarts itself; if a Deployment or ReplicaSet manages it, the controller creates a replacement, which the scheduler may place on any node with free capacity.

bash
kubectl get pods -n your-namespace | grep Evicted
kubectl describe pod <evicted-pod> -n your-namespace

The Events section will tell you exactly why:

Evicted: The node was low on resource: memory. Threshold quantity: 100Mi, available: 45Mi.
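Before digging into individual pods, a quick count helps scope the problem. A minimal sketch; the here-string sample stands in for real kubectl get pods output, and the pod names are made up:

```shell
# Simulated `kubectl get pods` output (illustrative pod names)
sample='NAME                     READY   STATUS    RESTARTS   AGE
api-5d9f7c6b8-x2lkq      0/1     Evicted   0          3h
api-5d9f7c6b8-p9wrt      1/1     Running   0          3h
worker-7c4b9d5f6-m3nds   0/1     Evicted   0          1h'

# Match the STATUS column exactly instead of grepping the whole line,
# so a pod named "evicted-cleaner" can't produce a false positive
printf '%s\n' "$sample" | awk '$3 == "Evicted"' | wc -l   # prints 2
```

In a live cluster, pipe kubectl get pods -n your-namespace into the same awk filter.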

Cause 1: Node Running Out of Memory

The most common cause: the node has less free memory than the kubelet's eviction threshold (memory.available, 100Mi by default).

bash
# Check node memory
kubectl describe node <node-name> | grep -A 5 "Conditions:"
kubectl top nodes

Fix 1: Set proper resource requests and limits on your pods

If pods don't declare resource requests, the scheduler treats them as needing zero resources and packs them onto nodes regardless of what they actually consume. Set them:

yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

Fix 2: Add more nodes or use a larger instance type

Fix 3: Tune the eviction threshold (rarely the right fix, since it doesn't address why memory runs out). Raising memory.available makes the kubelet evict earlier and keep more headroom:

yaml
# Kubelet config file (KubeletConfiguration)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"

Cause 2: Node Disk Pressure

Node's root disk is nearly full — usually from container logs, images, or overlay filesystems.

bash
kubectl describe node <node-name> | grep DiskPressure
df -h  # on the node

Fix: Clean up disk space on the node

bash
# Remove stopped containers (Docker runtime)
docker system prune -f

# Remove unused images
docker image prune -a -f

# On containerd-based nodes, prune images with crictl instead
crictl rmi --prune

# Check what's eating disk
du -sh /var/lib/docker/*
du -sh /var/log/pods/*

Long-term fix: Configure log rotation, set imagePullPolicy: IfNotPresent, add a larger data volume.
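On the log-rotation point: the kubelet can rotate container logs itself. A sketch of the relevant KubeletConfiguration fields; the size and file-count values here are examples to tune, not recommendations:

```yaml
# KubeletConfiguration fragment: node-level container log rotation
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"   # rotate a container's log once it reaches 10Mi
containerLogMaxFiles: 5       # keep at most 5 rotated log files per container
```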


Cause 3: Inode Exhaustion

A disk can show 80% of its space free while its inodes are 100% used. The node reports DiskPressure, but df -h looks healthy; you need df -i to see it.

bash
# Check inodes
df -i

If /var/lib/docker or /var/log shows 100% inode usage:

bash
# Find directories with many small files
find /var/log -type f | wc -l
find /var/lib/docker/overlay2 -type f | wc -l

Fix: Clean up old container layers, logs, or small files. Consider reformatting with larger inode ratio on a new volume.
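To pinpoint which directory is consuming the inodes, rank subdirectories by file count. A sketch; SCAN_DIR is a placeholder to point at /var/lib/docker or /var/log on the affected node:

```shell
# List the 10 subdirectories of SCAN_DIR holding the most files
# (each file costs at least one inode)
SCAN_DIR="${SCAN_DIR:-/var/log}"
for d in "$SCAN_DIR"/*/; do
  [ -d "$d" ] || continue
  printf '%s %s\n' "$(find "$d" -xdev -type f 2>/dev/null | wc -l)" "$d"
done | sort -rn | head -10
```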


Cause 4: PID Pressure

Too many processes running on the node — kubelet evicts low-priority pods to free PID slots.

bash
kubectl describe node <node-name> | grep PIDPressure
cat /proc/sys/kernel/pid_max  # on the node
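A common mitigation is to cap how many PIDs a single pod can hold, so one fork-bombing workload can't exhaust the node. A KubeletConfiguration sketch; the 4096 limit is an example value, not a recommendation:

```yaml
# KubeletConfiguration fragment: per-pod process limit
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096   # each pod may use at most 4096 process IDs
```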

Cleaning Up Evicted Pods

Evicted pods stick around as Failed pods; the control plane only garbage-collects terminated pods after a high threshold is crossed (12,500 by default), so in practice they pile up until you delete them. Clean them up:

bash
# Delete all evicted pods in a namespace
kubectl get pods -n your-namespace | grep Evicted | \
  awk '{print $1}' | xargs kubectl delete pod -n your-namespace
 
# Delete across all namespaces (NAMESPACE is column 1, NAME is column 2)
kubectl get pods --all-namespaces | grep Evicted | \
  awk '{print $2, $1}' | xargs -n 2 sh -c 'kubectl delete pod "$1" -n "$2"' sh

# Alternative: evicted pods have phase Failed, so a field selector also works
# (note this deletes every Failed pod, not only evicted ones)
kubectl delete pods --all-namespaces --field-selector=status.phase=Failed
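The column order in the all-namespaces pipeline is easy to get backwards: NAMESPACE comes first, then NAME. A self-contained check against simulated output (names are made up):

```shell
# Simulated `kubectl get pods --all-namespaces` output
sample='NAMESPACE   NAME                    READY   STATUS    RESTARTS   AGE
prod        api-5d9f7c6b8-x2lkq     0/1     Evicted   0          3h
staging     worker-7c4b9d5f6-m3nd   0/1     Evicted   0          1h'

# STATUS is now column 4; print pod name ($2) before namespace ($1)
printf '%s\n' "$sample" | awk '$4 == "Evicted" {print $2, $1}'
```

Each output line is a pod/namespace pair in the order the delete pipeline consumes them.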

Prevent Future Evictions

1. Always Set Resource Requests

This is the single most important prevention step. Without requests, the scheduler has no idea how much a pod needs, so nodes get overcommitted and the kubelet has to evict.

2. Use PodDisruptionBudget

A PodDisruptionBudget protects critical pods from voluntary disruptions such as kubectl drain and node upgrades. Note that it does not stop kubelet node-pressure eviction (the kind covered above), which ignores PDBs:

yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: myapp

3. Set Priority Classes

Under node pressure, the kubelet takes pod priority into account when choosing eviction victims, so give system-critical pods a higher priority:

yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
---
# Reference it from the pod spec:
spec:
  priorityClassName: high-priority

4. Monitor Node Resources

Alert before you hit eviction thresholds:

yaml
# Prometheus alert
- alert: NodeMemoryLow
  expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.15
  for: 5m
  labels:
    severity: warning

Quick Checklist

| Issue | Check | Fix |
|---|---|---|
| Memory pressure | kubectl top nodes | Add resource requests/limits, scale nodes |
| Disk pressure | df -h on node | Clean images/logs, add storage |
| Inode exhaustion | df -i on node | Clean small files, larger inode volume |
| PID pressure | kubectl describe node | Kill runaway processes |
| Evicted pods piling up | kubectl get pods \| grep Evicted | Delete evicted pods |