Kubernetes Pod Stuck in Pending State: Every Cause and Fix (2026)
Your pod says Pending and nothing is happening. Here's how to diagnose every possible reason — insufficient resources, taints, PVC issues, node selectors — and fix them fast.
You run kubectl apply -f deployment.yaml, check the pods, and there it is — STATUS: Pending. No error message. No crash. Just… waiting.
This is one of the most common and frustrating Kubernetes issues. The good news? Pending always has a reason. Kubernetes is just waiting for a condition that hasn't been met yet. Once you know how to read the signals, you can fix it in minutes.
This guide covers every cause of a stuck Pending pod — and exactly how to fix each one.
What Does "Pending" Actually Mean?
When a pod is in Pending state, it means the Kubernetes scheduler has not yet placed the pod on any node. The pod exists in the API server, but no node has picked it up to run it.
There are two stages where this can get stuck:
- Scheduling — The scheduler can't find a suitable node
- Image pulling — The node accepted the pod but can't pull the container image
Understanding this distinction is the first step to diagnosing the issue.
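One quick way to tell the two stages apart is the pod's `PodScheduled` condition. Here's a small sketch — the pod name and the `triage_stage` helper are placeholders of mine, not anything Kubernetes ships:

```shell
# You'd read the condition from the cluster like this (pod name is an example):
#   kubectl get pod my-pod -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")].status}'
#
# A tiny helper that turns that condition value into a next step:
triage_stage() {
  case "$1" in
    True)  echo "Scheduled: a node accepted the pod. Check image pull and volume events." ;;
    False) echo "Not scheduled: the scheduler found no suitable node. Check resources, taints, selectors." ;;
    *)     echo "Unknown: condition missing. Run kubectl describe pod and read the events." ;;
  esac
}

triage_stage False
# → Not scheduled: the scheduler found no suitable node. Check resources, taints, selectors.
```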
Step 1: Always Start With kubectl describe
Before anything else, run this command. It tells you exactly why the pod is stuck:
kubectl describe pod <pod-name> -n <namespace>
Scroll to the Events section at the bottom. This is your diagnostic goldmine. You'll see messages like:
Warning FailedScheduling 0/3 nodes are available: 3 Insufficient cpu.
Warning FailedScheduling 0/3 nodes are available: 3 node(s) had untolerated taint.
Warning FailedMount Unable to attach or mount volumes: timed out waiting for the condition
Each message points to a specific cause. Let's go through all of them.
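That message-to-cause mapping can be sketched as a grep-based triage script. This is my own rough sketch (the `classify_pending` name and the cause labels are mine), assuming you pass it the text of a `FailedScheduling` or `Failed` event:

```shell
# Classify a scheduling event message into a likely cause.
classify_pending() {
  ev="$1"
  if   printf '%s' "$ev" | grep -qiE "insufficient (cpu|memory)";        then echo "insufficient-resources"
  elif printf '%s' "$ev" | grep -qiE "untolerated taint";                then echo "taint"
  elif printf '%s' "$ev" | grep -qiE "affinity/selector";                then echo "selector"
  elif printf '%s' "$ev" | grep -qiE "failedmount|persistentvolumeclaim"; then echo "volume"
  elif printf '%s' "$ev" | grep -qiE "errimagepull|failed to pull image"; then echo "image-pull"
  else echo "unknown"
  fi
}

classify_pending "0/3 nodes are available: 3 Insufficient cpu."
# → insufficient-resources
```

Each of the labels corresponds to one of the causes below.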
Cause 1: Insufficient CPU or Memory
The most common cause. Your pod is requesting more CPU or memory than any node has available.
How to diagnose
# Check what your nodes have available
kubectl describe nodes | grep -A 5 "Allocated resources"
# See what your pod is requesting
kubectl describe pod <pod-name> | grep -A 4 "Requests"
You'll often see something like:
0/3 nodes are available: 3 Insufficient cpu, 3 Insufficient memory.
How to fix
Option A: Lower your resource requests (if they're too high)
resources:
requests:
cpu: "100m" # was "2000m" — lower it
memory: "128Mi" # was "4Gi" — lower it
limits:
cpu: "500m"
memory: "512Mi"
Option B: Scale up your cluster by adding more nodes.
Option C: Free up resources by deleting unused pods or deployments.
Best practice: Always set resource requests. Without them, Kubernetes can't make intelligent scheduling decisions.
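One way to enforce that best practice is a LimitRange, which gives any container that omits requests a namespace-wide default. A minimal sketch — the name, namespace, and values here are placeholders to adjust:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-requests      # placeholder name
  namespace: my-namespace     # adjust to your namespace
spec:
  limits:
    - type: Container
      defaultRequest:         # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
      default:                # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
```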
Cause 2: Node Selector or Affinity Not Matching
Your pod has a nodeSelector or nodeAffinity rule, but no node has the matching label.
How to diagnose
kubectl describe pod <pod-name> | grep -A 10 "Node-Selectors"
# Look for: "0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector"
# Check labels on your nodes
kubectl get nodes --show-labels
How to fix
Either add the label to a node:
kubectl label node <node-name> disktype=ssd
Or remove/fix the selector in your deployment if it's wrong:
# Before (requires label that doesn't exist)
spec:
nodeSelector:
disktype: ssd
# After (remove it if not needed)
spec: {}
Cause 3: Node Taint Without a Matching Toleration
Nodes can have taints that repel pods unless the pod explicitly tolerates that taint. Control plane nodes in managed Kubernetes clusters are tainted this way by default.
How to diagnose
# Check taints on nodes
kubectl describe nodes | grep Taints
# Common output:
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
# Taints: dedicated=gpu:NoSchedule
# Error message:
# 0/3 nodes are available: 3 node(s) had untolerated taint {dedicated: gpu}
How to fix
Add a toleration to your pod spec:
spec:
tolerations:
- key: "dedicated"
operator: "Equal"
value: "gpu"
effect: "NoSchedule"
For control plane nodes (if you want to allow scheduling there in a test cluster):
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Cause 4: PersistentVolumeClaim Not Bound
If your pod mounts a PVC and that PVC is in Pending state (no matching PersistentVolume found), the pod will also stay Pending.
How to diagnose
kubectl get pvc -n <namespace>
# STATUS: Pending — this is the problem
kubectl describe pvc <pvc-name>
# Look for: "no persistent volumes available for this claim"
How to fix
Option A: Check if a StorageClass exists
kubectl get storageclass
If no default StorageClass exists, PVCs won't be provisioned automatically. On bare metal, install a provisioner (e.g., local-path-provisioner); managed clusters like EKS, GKE, and AKS ship with one pre-configured.
Option B: Create a manual PersistentVolume if you're on bare metal:
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /data/my-app
storageClassName: standard
Option C: Make sure the PVC's storageClassName matches an existing class:
spec:
storageClassName: gp2 # must match kubectl get storageclass output
resources:
requests:
storage: 10Gi
Cause 5: Image Pull Errors (ImagePullBackOff / ErrImagePull)
Technically the pod moved past scheduling, but it's stuck because the container image can't be pulled. You'll see ImagePullBackOff rather than Pending, but it's worth covering here.
How to diagnose
kubectl describe pod <pod-name>
# Events:
# Warning Failed Failed to pull image "myregistry/myapp:latest": rpc error: code = Unknown
# Warning Failed Error: ErrImagePull
Common reasons and fixes
| Reason | Fix |
|---|---|
| Image tag doesn't exist | Check the image name and tag on your registry |
| Private registry, no pull secret | Create an imagePullSecret |
| Registry is down | Check registry availability |
| Wrong image name (typo) | Verify with docker pull <image> locally |
Creating an image pull secret:
kubectl create secret docker-registry regcred \
--docker-server=ghcr.io \
--docker-username=<your-username> \
--docker-password=<your-token> \
-n <namespace>
Then reference it in your deployment:
spec:
imagePullSecrets:
- name: regcred
containers:
- name: app
image: ghcr.io/myorg/myapp:v1.2.3
Cause 6: Resource Quota Exceeded in Namespace
Namespaces can have ResourceQuota objects that cap total CPU/memory usage. If you're at the limit, no new pods can be scheduled.
How to diagnose
kubectl get resourcequota -n <namespace>
kubectl describe resourcequota -n <namespace>
You'll see something like:
Resource Used Hard
cpu 1900m 2000m ← nearly at limit
memory 3.8Gi 4Gi
pods 9 10
How to fix
Either increase the quota (if you have cluster-admin access):
apiVersion: v1
kind: ResourceQuota
metadata:
name: my-quota
namespace: production
spec:
hard:
requests.cpu: "8"
requests.memory: 16Gi
limits.cpu: "16"
limits.memory: 32Gi
pods: "50"
Or delete unused resources in that namespace to free up quota.
Cause 7: All Nodes Are Unschedulable (Cordoned)
A node can be manually cordoned — marked as unschedulable — during maintenance. If all your nodes are cordoned, nothing can schedule.
How to diagnose
kubectl get nodes
# STATUS: Ready,SchedulingDisabled — means it's cordoned
How to fix
# Uncordon the node
kubectl uncordon <node-name>
Cause 8: Pod Is Waiting for a ConfigMap or Secret That Doesn't Exist
If your pod references a ConfigMap or Secret that doesn't exist, it will stay in Pending (or CreateContainerConfigError).
How to diagnose
kubectl describe pod <pod-name>
# Warning Failed Error: secret "my-app-secrets" not found
How to fix
Create the missing resource before deploying the pod:
kubectl create secret generic my-app-secrets \
--from-literal=DB_PASSWORD=mypassword \
-n <namespace>
kubectl create configmap my-app-config \
--from-env-file=.env.production \
-n <namespace>
Quick Diagnosis Checklist
When a pod is stuck in Pending, run through this checklist in order:
# 1. Read the events — this solves 90% of cases
kubectl describe pod <pod-name> -n <namespace>
# 2. Check node capacity
kubectl describe nodes | grep -A 5 "Allocated resources"
# 3. Check PVCs
kubectl get pvc -n <namespace>
# 4. Check resource quotas
kubectl get resourcequota -n <namespace>
# 5. Check node conditions
kubectl get nodes
kubectl describe node <node-name>
# 6. Check pod's node selector / affinity
kubectl get pod <pod-name> -o yaml | grep -A 10 "nodeSelector"
Prevention Tips
Avoiding Pending pods in the first place is better than debugging them later.
1. Always define resource requests. Without them, Kubernetes can't schedule intelligently and you'll hit capacity surprises.
2. Use Pod Disruption Budgets. They protect availability during node maintenance.
3. Set up alerts for Pending pods. In Prometheus:
alert: PodStuckPending
expr: kube_pod_status_phase{phase="Pending"} > 0
for: 5m
labels:
severity: warning
annotations:
summary: "Pod {{ $labels.pod }} stuck in Pending state"
4. Use Cluster Autoscaler. If you're on a cloud provider, it can automatically add nodes when capacity runs low.
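A minimal Pod Disruption Budget (tip 2 above) looks like this — the name and label selector are placeholders for your own workload:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb            # placeholder name
spec:
  minAvailable: 2             # or use maxUnavailable instead
  selector:
    matchLabels:
      app: my-app             # must match your deployment's pod labels
```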
Learn More
If you want to go deep on Kubernetes troubleshooting and production-readiness:
- KodeKloud — The best hands-on K8s labs I've used. Their CKA/CKAD courses walk you through exactly these kinds of real-world scenarios.
Pending pods are always fixable. The key is knowing that Kubernetes leaves detailed breadcrumbs in the Events section — once you know where to look, the cause is almost always obvious within 30 seconds.