Kubernetes Evicted Pods — Why It Happens and How to Fix It (2026)
Pods suddenly showing 'Evicted' status in Kubernetes? Here's every reason nodes evict pods and exactly how to prevent it from happening again.
You check your cluster and see pods with Evicted status. They're not running, they're not pending — they've been forcibly removed by Kubernetes. Here's what's happening and how to stop it.
What Eviction Means
The kubelet (the agent running on each node) monitors node resources. When a node runs low on memory, disk, inodes, or PIDs, the kubelet starts evicting pods to reclaim them.
This is Kubernetes protecting the node from complete failure. An evicted pod won't restart on the same node automatically: if it belongs to a Deployment or ReplicaSet, a replacement gets scheduled (usually elsewhere), but a bare pod stays dead.
```shell
kubectl get pods -n your-namespace | grep Evicted
kubectl describe pod <evicted-pod> -n your-namespace
```

The Events section will tell you exactly why:

```
Evicted: The node was low on resource: memory. Threshold quantity: 100Mi, available: 45Mi.
```
Cause 1: Node Running Out of Memory
This is the most common cause: the node has less available memory than the eviction threshold (default: `memory.available` below 100Mi).
```shell
# Check node memory
kubectl describe node <node-name> | grep -A 5 "Conditions:"
kubectl top nodes
```

Fix 1: Set proper resource requests and limits on your pods
If pods don't declare resource requests, the scheduler treats them as consuming nothing and packs more onto a node than it can actually handle. Set them:
```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```

Fix 2: Add more nodes or use a larger instance type
Fix 3: Increase the eviction threshold (not recommended; it hides the real problem):
```yaml
# In the kubelet config
evictionHard:
  memory.available: "200Mi"
```

Cause 2: Node Disk Pressure
The node's root disk is nearly full, usually from container logs, images, or overlay filesystems.
```shell
kubectl describe node <node-name> | grep DiskPressure
df -h   # on the node
```

Fix: Clean up disk space on the node
```shell
# Remove stopped containers
docker system prune -f

# Remove unused images
docker image prune -a -f

# Check what's eating disk
du -sh /var/lib/docker/*
du -sh /var/log/pods/*
```

On containerd-based nodes (most modern clusters), use `crictl rmi --prune` instead of the docker commands.

Long-term fix: Configure log rotation, set `imagePullPolicy: IfNotPresent`, add a larger data volume.
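The kubelet can handle the log-rotation and image-cleanup side of this on its own. A sketch of the relevant KubeletConfiguration fields (the values here are illustrative, not recommendations for every cluster):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 80   # start image garbage collection at 80% disk usage
imageGCLowThresholdPercent: 60    # collect until usage drops back to 60%
containerLogMaxSize: "10Mi"       # rotate a container's log file at 10Mi
containerLogMaxFiles: 3           # keep at most 3 rotated files per container
```

With these set, image garbage collection kicks in well before the disk-pressure eviction threshold is reached.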
Cause 3: Inode Exhaustion
The disk might show 80% free while inodes are at 100%. It behaves like disk pressure, but `df -h` won't show it.
```shell
# Check inodes
df -i
```

If /var/lib/docker or /var/log shows 100% inode usage:

```shell
# Find directories with many small files
find /var/log -type f | wc -l
find /var/lib/docker/overlay2 -type f | wc -l
```

Fix: Clean up old container layers, logs, or small files. For a lasting fix, consider provisioning a new volume formatted with a higher inode ratio.
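To narrow down which directory is actually burning inodes, a small helper like this can rank subdirectories by entry count (the function name and example paths are mine, not standard tooling):

```shell
# inode_top: rank the immediate subdirectories of $1 by number of
# filesystem entries (a rough proxy for inode usage).
inode_top() {
  for d in "$1"/*/; do
    [ -d "$d" ] || continue
    # -xdev stays on one filesystem so mounts don't skew the counts
    printf '%s\t%s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
  done | sort -rn
}

# Example: inode_top /var/lib/docker/overlay2 | head -10
```

Run it against /var/lib/docker/overlay2 and /var/log/pods first; the top entry is usually the layer or log directory worth cleaning.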
Cause 4: PID Pressure
Too many processes are running on the node, so the kubelet evicts low-priority pods to free PID slots. Setting `podPidsLimit` in the kubelet config caps each pod's process count before it gets this far.

```shell
kubectl describe node <node-name> | grep PIDPressure
cat /proc/sys/kernel/pid_max   # on the node
```

Cleaning Up Evicted Pods
Evicted pods stay in your cluster in Evicted state (they don't auto-delete). Clean them up:
```shell
# Delete all evicted pods in a namespace
kubectl get pods -n your-namespace | grep Evicted | \
  awk '{print $1}' | xargs kubectl delete pod -n your-namespace

# Delete across all namespaces ($0 is the namespace, $1 the pod name inside sh -c)
kubectl get pods --all-namespaces | grep Evicted | \
  awk '{print $1, $2}' | xargs -n 2 sh -c 'kubectl delete pod $1 -n $0'
```

Prevent Future Evictions
1. Always Set Resource Requests
This is the single most important thing. Without requests, the scheduler has no idea what pods consume, so nodes get overcommitted and eventually start evicting.
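To enforce this per namespace rather than relying on every manifest, a LimitRange can apply default requests and limits to containers that don't set their own (the name and values below are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources   # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container sets no requests
      memory: "256Mi"
      cpu: "250m"
    default:               # applied when a container sets no limits
      memory: "512Mi"
      cpu: "500m"
```

Apply one per namespace and new pods without explicit resources pick up these defaults at admission time.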
2. Use PodDisruptionBudget
Protect critical pods from voluntary disruptions such as node drains. Note that node-pressure evictions by the kubelet do not respect PDBs, so combine this with resource requests and priority classes:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: myapp
```

3. Set Priority Classes
System-critical pods get higher priority; under node pressure, lower-priority pods are evicted first:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
```

Then reference it in your pod spec:

```yaml
spec:
  priorityClassName: high-priority
```

4. Monitor Node Resources
Alert before you hit eviction thresholds:

```yaml
# Prometheus alert
- alert: NodeMemoryLow
  expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.15
  for: 5m
  labels:
    severity: warning
```

Quick Checklist
| Issue | Check | Fix |
|---|---|---|
| Memory pressure | `kubectl top nodes` | Add resource requests/limits, scale nodes |
| Disk pressure | `df -h` on node | Clean images/logs, add storage |
| Inode exhaustion | `df -i` on node | Clean small files, larger inode volume |
| PID pressure | `kubectl describe node` | Kill runaway processes |
| Evicted pods piling up | `kubectl get pods \| grep Evicted` | Delete evicted pods |