Kubernetes Pods Stuck in Terminating State: How to Fix It in 2026
Pods won't delete and stuck in Terminating? Here's how to diagnose finalizers, graceful shutdown issues, and force-delete stuck pods step by step.
You run kubectl delete pod my-app-pod and wait. And wait. The pod just sits there in Terminating state, refusing to disappear. You try again. Nothing. You stare at kubectl get pods hoping it'll resolve itself. It won't.
This is one of the most common — and most annoying — issues in Kubernetes. Let's break down exactly why it happens and how to fix it every single time.
Why Pods Get Stuck in Terminating
When you delete a pod, Kubernetes doesn't just nuke it. There's a graceful shutdown sequence:
1. The pod's state is set to Terminating
2. The preStop hook runs (if defined)
3. SIGTERM is sent to the container processes
4. Kubernetes waits up to terminationGracePeriodSeconds (default: 30s)
5. SIGKILL is sent if the process is still running
6. Finalizers are processed
7. The pod object is removed from the API server
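The TERM-then-KILL part of this sequence can be sketched with plain shell against a local process. This is a rough local stand-in for what the kubelet does, not real kubelet code; `sleep 60` plays the role of the container's main process:

```shell
#!/bin/sh
# Stand-in for a container's main process.
sleep 60 &
PID=$!

kill -TERM "$PID"          # step 3: polite shutdown request
sleep 1                    # stand-in for terminationGracePeriodSeconds

# step 5: if the process ignored SIGTERM, force-kill it
if kill -0 "$PID" 2>/dev/null; then
  kill -KILL "$PID"
fi
wait "$PID" 2>/dev/null || true
echo "process cleaned up"
```

Here `sleep` exits as soon as it receives SIGTERM, so the SIGKILL branch never fires; a process that ignores SIGTERM would hit it after the grace period.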
If any of these steps gets stuck, the pod stays in Terminating forever. The usual culprits are:
- Finalizers that never complete
- PVC/PV finalizers blocking deletion
- preStop hooks that hang indefinitely
- Node is unreachable (the kubelet can't do the actual cleanup)
- Stuck volume unmounts preventing container removal
- A broken admission webhook blocking the delete operation
Let's diagnose and fix each one.
Step 1: Check What's Actually Happening
First, get the full picture of the stuck pod:
kubectl get pod my-app-pod -o yaml

Look at three things:

- metadata.finalizers — any finalizers listed?
- metadata.deletionTimestamp — when was deletion requested?
- status.conditions — any error conditions?
A quicker way to check finalizers specifically:
kubectl get pod my-app-pod -o jsonpath='{.metadata.finalizers}'

If you see finalizers listed, that's likely your problem. If the finalizer list is empty, the issue is elsewhere — probably the node or a volume.
Also check the pod's events:
kubectl describe pod my-app-pod

Scroll to the Events section at the bottom. You'll often see messages like FailedKillPod, FailedSync, or volume-related errors that tell you exactly what's blocking termination.
Step 2: Fixing Finalizer-Stuck Pods
Finalizers are the number one reason pods get stuck in Terminating. A finalizer is a key in metadata.finalizers that tells Kubernetes: "Don't delete this object until I say it's okay."
If the controller responsible for a finalizer is broken, deleted, or misconfigured, the finalizer never gets removed and the pod hangs forever.
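For reference, this is what a finalizer looks like on a pod object. The finalizer name below is a made-up example of a custom finalizer, not a real Kubernetes one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  finalizers:
    - example.com/cleanup-db-records   # hypothetical custom finalizer
```

Once deletion is requested, the API server sets metadata.deletionTimestamp and waits for this list to empty before removing the object.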
Remove the Finalizer Manually
kubectl patch pod my-app-pod -p '{"metadata":{"finalizers":null}}'

That's it. The pod should disappear within seconds. If you want to remove a specific finalizer while keeping others:

kubectl patch pod my-app-pod --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'

PVC/PV Finalizer Blocking Pod Deletion
Sometimes the pod itself doesn't have finalizers, but the PVC it's attached to does. The kubernetes.io/pvc-protection finalizer prevents PVC deletion while a pod is using it — but if there's a circular dependency, things get stuck.
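A protected PVC carries this finalizer in its metadata (kubernetes.io/pvc-protection is the real, built-in finalizer name):

```yaml
metadata:
  name: my-pvc
  finalizers:
    - kubernetes.io/pvc-protection   # removed by Kubernetes once no pod uses the PVC
```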
Check your PVCs:
kubectl get pvc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.finalizers}{"\n"}{end}'

If a PVC is also stuck in Terminating, patch it:

kubectl patch pvc my-pvc -p '{"metadata":{"finalizers":null}}'

Warning: Removing finalizers manually skips the cleanup that finalizer was supposed to do. For PVC protection finalizers, make sure no other pod is actively using that volume before you remove it.
Step 3: Handling Node-Level Issues
If the node where the pod is running is down or unreachable, the kubelet can't execute the graceful shutdown. The pod will sit in Terminating until the node comes back or you intervene.
Check the node status:
kubectl get nodes
kubectl describe node <node-name>

If the node shows NotReady:
Option A: Wait for the Node to Recover
If the node is temporarily unreachable (network blip, maintenance), the pod will terminate once the kubelet reconnects. By default, pods on an unreachable node are evicted after about 5 minutes — in current Kubernetes versions this is driven by the node.kubernetes.io/unreachable taint and a default tolerationSeconds of 300, which replaced the older pod-eviction-timeout flag.
Option B: Force Delete the Pod
If the node isn't coming back:
kubectl delete pod my-app-pod --grace-period=0 --force

This removes the pod from the API server immediately. The actual container may still be running on the dead node — but Kubernetes won't schedule a replacement until the old pod object is gone, so force deletion is the right call here.
Option C: Delete the Node
If the node is permanently gone (terminated cloud instance, dead hardware):
kubectl delete node <node-name>This will trigger cleanup of all pods that were on that node.
Step 4: Stuck Volumes
Volume unmount failures are a sneaky cause of Terminating pods. If a volume (especially cloud block storage like EBS, Persistent Disks, or Cinder volumes) can't be detached from the node, the pod can't fully terminate.
Check for volume attachment issues:
kubectl get volumeattachments

Look for attachments referencing the stuck pod's PV. If you see stale attachments:

kubectl delete volumeattachment <attachment-name>

On managed Kubernetes services like DigitalOcean Kubernetes or EKS, you might also need to check the cloud provider's console to verify the volume isn't stuck in an "attaching" or "detaching" state at the infrastructure level.
Step 5: preStop Hooks That Hang
If your pod spec has a preStop hook that makes an HTTP call or runs a script, and that hook hangs, the pod won't proceed with shutdown until terminationGracePeriodSeconds expires.
Check your pod spec:
kubectl get pod my-app-pod -o jsonpath='{.spec.containers[*].lifecycle}'

If you see a preStop hook making a call to a service that's down, that's your blocker. The fix:
- Short term: Force delete the pod with --grace-period=0 --force
- Long term: Add timeouts to your preStop hooks and make sure they're resilient to failures
Example of a safer preStop hook:
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - "curl -s --max-time 5 http://localhost:8080/shutdown || true"

The --max-time 5 and || true ensure the hook completes even if the endpoint is unreachable.
Step 6: Admission Webhooks Blocking Deletion
This one is rare but infuriating. If you have a ValidatingWebhookConfiguration or MutatingWebhookConfiguration that intercepts DELETE operations and the webhook server is down, every delete request will hang.
Check your webhooks:
kubectl get validatingwebhookconfiguration
kubectl get mutatingwebhookconfiguration

Look for webhooks with failurePolicy: Fail that target pod resources. If the webhook service is down, either:
- Fix the webhook service
- Temporarily set failurePolicy: Ignore
- Delete the webhook configuration entirely if it's non-critical
The Nuclear Option: Bulk Force-Delete
When you've got multiple pods stuck (say after a failed node or a bad deployment), delete them all at once:
# Force delete all Terminating pods in a namespace
kubectl get pods -n my-namespace | grep Terminating | awk '{print $1}' | xargs kubectl delete pod -n my-namespace --grace-period=0 --force

For all namespaces:

kubectl get pods --all-namespaces | grep Terminating | awk '{print "-n " $1 " " $2}' | xargs -L1 kubectl delete pod --grace-period=0 --force

Use this carefully in production. Force deletion skips graceful shutdown, which means in-flight requests get dropped and data might not be flushed.
Preventing Stuck Terminating Pods
Once you've fixed the immediate issue, prevent it from happening again:
Set Reasonable terminationGracePeriodSeconds
The default is 30 seconds. For most apps, that's fine. For apps with long-running connections or cleanup tasks, increase it — but don't go overboard:
spec:
  terminationGracePeriodSeconds: 60

Handle SIGTERM in Your Application
Your app should catch SIGTERM and shut down cleanly — stop accepting new requests, drain existing connections, flush buffers, then exit. If your app ignores SIGTERM, it'll always wait for the full grace period before SIGKILL.
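As a minimal sketch of the idea (not tied to any framework), even a shell entrypoint can trap SIGTERM and drain before exiting. The self-sent `kill` just simulates the kubelet for demonstration:

```shell
#!/bin/sh
# Sketch of a container entrypoint that shuts down cleanly on SIGTERM.
# STOP flips to 1 when the signal arrives, so the work loop finishes
# its current iteration instead of dying mid-request.
STOP=0
trap 'STOP=1' TERM

# Simulate the kubelet: send ourselves SIGTERM after one second.
( sleep 1; kill -TERM $$ ) &

while [ "$STOP" -eq 0 ]; do
  sleep 1   # stand-in for serving a request
done

echo "draining connections, flushing buffers, exiting cleanly"
```

The same pattern applies in any language: register a SIGTERM handler, flip a flag, and let the main loop exit on its own terms.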
Avoid Unnecessary Finalizers
Only add finalizers when you genuinely need guaranteed cleanup. Every finalizer is a potential hang point if its controller misbehaves.
Monitor Node Health
Set up alerts for NotReady nodes. The faster you detect a dead node, the faster you can force-delete its orphaned pods. If you want to go deep on Kubernetes monitoring and observability, KodeKloud's CKA/CKAD courses cover this thoroughly — including how to set up proper node health checks and pod disruption budgets.
Use Pod Disruption Budgets
PDBs won't directly prevent stuck Terminating pods, but they ensure that voluntary disruptions (like node drains) don't kill too many pods at once, reducing the blast radius when things go wrong:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app

Quick Reference Cheat Sheet
| Symptom | Likely Cause | Fix |
|---|---|---|
| Pod has finalizers | Controller not cleaning up | kubectl patch pod <name> -p '{"metadata":{"finalizers":null}}' |
| Node is NotReady | Kubelet unreachable | Force delete or delete the node |
| Volume errors in events | Stuck volume detach | Delete stale volumeattachment objects |
| preStop hook timeout | Hook target unreachable | Force delete, add timeouts to hooks |
| All deletes hanging | Broken admission webhook | Fix or remove the webhook |
Wrapping Up
Pods stuck in Terminating almost always come down to one of five things: finalizers, dead nodes, stuck volumes, hanging preStop hooks, or broken webhooks. The diagnosis path is always the same — check kubectl describe, look at finalizers, check node status, and look at events.
Force deletion with --grace-period=0 --force is your escape hatch, but understanding the root cause is what separates firefighting from actually fixing the problem. Get comfortable reading pod YAML and you'll resolve these in under a minute every time.
If you're preparing for the CKA or CKAD exam, troubleshooting stuck pods is a common scenario. KodeKloud's hands-on labs are the best way to practice these exact situations in a real cluster environment. And if you need a managed Kubernetes cluster to experiment with, DigitalOcean's DOKS spins up in minutes and is perfect for learning without the AWS complexity.