
Kubernetes Pods Stuck in Terminating State: How to Fix It in 2026

Pods won't delete and are stuck in Terminating? Here's how to diagnose finalizers, graceful-shutdown issues, and force-delete stuck pods, step by step.

DevOpsBoys · Mar 23, 2026 · 7 min read

You run kubectl delete pod my-app-pod and wait. And wait. The pod just sits there in Terminating state, refusing to disappear. You try again. Nothing. You stare at kubectl get pods hoping it'll resolve itself. It won't.

This is one of the most common — and most annoying — issues in Kubernetes. Let's break down exactly why it happens and how to fix it every single time.

Why Pods Get Stuck in Terminating

When you delete a pod, Kubernetes doesn't just nuke it. There's a graceful shutdown sequence:

  1. The pod's state is set to Terminating
  2. The preStop hook runs (if defined)
  3. SIGTERM is sent to the container processes
  4. Kubernetes waits up to terminationGracePeriodSeconds (default: 30s)
  5. SIGKILL is sent if the process is still running
  6. Finalizers are processed
  7. The pod object is removed from the API server

If any of these steps gets stuck, the pod stays in Terminating forever. The usual culprits are:

  • Finalizers that never complete
  • PVC/PV finalizers blocking deletion
  • preStop hooks that hang indefinitely
  • Node is unreachable (the kubelet can't do the actual cleanup)
  • Stuck volume unmounts preventing container removal
  • A broken admission webhook blocking the delete operation

Let's diagnose and fix each one.

Step 1: Check What's Actually Happening

First, get the full picture of the stuck pod:

bash
kubectl get pod my-app-pod -o yaml

Look at three things:

  • metadata.finalizers — any finalizers listed?
  • metadata.deletionTimestamp — when was deletion requested?
  • status.conditions — any error conditions?

A quicker way to check finalizers specifically:

bash
kubectl get pod my-app-pod -o jsonpath='{.metadata.finalizers}'

If you see finalizers listed, that's likely your problem. If the finalizer list is empty, the issue is elsewhere — probably the node or a volume.
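To scan a whole namespace in one shot, a small jq pipeline (assuming jq is installed; "my-namespace" is a placeholder) can print every pod that still carries finalizers:

```shell
# List every pod in the namespace that still carries finalizers (requires jq).
# "my-namespace" is a placeholder; substitute your own namespace.
kubectl get pods -n my-namespace -o json \
  | jq -r '.items[]
           | select(.metadata.finalizers != null)
           | "\(.metadata.name)\t\(.metadata.finalizers | join(","))"'
```

Each output line is a pod name followed by its comma-separated finalizer keys, which makes it easy to spot the one controller that's misbehaving.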

Also check the pod's events:

bash
kubectl describe pod my-app-pod

Scroll to the Events section at the bottom. You'll often see messages like FailedKillPod, FailedSync, or volume-related errors that tell you exactly what's blocking termination.

Step 2: Fixing Finalizer-Stuck Pods

Finalizers are the number one reason pods get stuck in Terminating. A finalizer is a key in metadata.finalizers that tells Kubernetes: "Don't delete this object until I say it's okay."

If the controller responsible for a finalizer is broken, deleted, or misconfigured, the finalizer never gets removed and the pod hangs forever.

Remove the Finalizer Manually

bash
kubectl patch pod my-app-pod -p '{"metadata":{"finalizers":null}}'

That's it. The pod should disappear within seconds. To remove one specific finalizer while keeping the others, target it by array index (here /0, the first entry in metadata.finalizers):

bash
kubectl patch pod my-app-pod --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'
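Indexing by position is fragile if the array order changes between reads. A sketch of a by-name alternative, assuming jq is available ("example.com/my-finalizer" is a hypothetical key — use the one you actually saw in metadata.finalizers):

```shell
# Remove a finalizer by name rather than by array index (requires jq).
# "example.com/my-finalizer" is a hypothetical key for illustration.
FINALIZER="example.com/my-finalizer"
kubectl get pod my-app-pod -o json \
  | jq --arg f "$FINALIZER" '.metadata.finalizers -= [$f]' \
  | kubectl replace -f -
```

jq's array subtraction (`-= [$f]`) drops only the matching entry, so any other finalizers on the pod survive the edit.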

PVC/PV Finalizer Blocking Pod Deletion

Sometimes the pod itself doesn't have finalizers, but the PVC it's attached to does. The kubernetes.io/pvc-protection finalizer prevents PVC deletion while a pod is using it — but if there's a circular dependency, things get stuck.

Check your PVCs:

bash
kubectl get pvc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.finalizers}{"\n"}{end}'

If a PVC is also stuck in Terminating, patch it:

bash
kubectl patch pvc my-pvc -p '{"metadata":{"finalizers":null}}'
⚠️ Warning: Removing finalizers manually skips the cleanup that finalizer was supposed to do. For PVC protection finalizers, make sure no other pod is actively using that volume before you remove it.
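One way to run that check, assuming jq is installed ("my-pvc" and "my-namespace" are placeholders): list every pod in the namespace that mounts the claim before you strip its finalizer.

```shell
# List pods that mount a given PVC before removing its protection finalizer.
# "my-pvc" and "my-namespace" are placeholders (requires jq).
kubectl get pods -n my-namespace -o json \
  | jq -r --arg pvc "my-pvc" \
      '.items[]
       | select(any(.spec.volumes[]?; .persistentVolumeClaim.claimName == $pvc))
       | .metadata.name'
```

If this prints nothing (or only the pod you're already deleting), the claim is safe to unstick.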

Step 3: Handling Node-Level Issues

If the node where the pod is running is down or unreachable, the kubelet can't execute the graceful shutdown. The pod will sit in Terminating until the node comes back or you intervene.

Check the node status:

bash
kubectl get nodes
kubectl describe node <node-name>

If the node shows NotReady:

Option A: Wait for the Node to Recover

If the node is temporarily unreachable (network blip, maintenance), the pod will terminate once the kubelet reconnects. By default, Kubernetes waits about 5 minutes before evicting pods from an unreachable node (the classic pod-eviction-timeout; on current clusters this comes from the 300-second default toleration for the node.kubernetes.io/unreachable taint).

Option B: Force Delete the Pod

If the node isn't coming back:

bash
kubectl delete pod my-app-pod --grace-period=0 --force

This removes the pod from the API server immediately. The actual container may still be running on the dead node — but Kubernetes won't schedule a replacement until the old pod object is gone, so force deletion is the right call here.

Option C: Delete the Node

If the node is permanently gone (terminated cloud instance, dead hardware):

bash
kubectl delete node <node-name>

This will trigger cleanup of all pods that were on that node.

Step 4: Stuck Volumes

Volume unmount failures are a sneaky cause of Terminating pods. If a volume (especially cloud block storage like EBS, Persistent Disks, or Cinder volumes) can't be detached from the node, the pod can't fully terminate.

Check for volume attachment issues:

bash
kubectl get volumeattachments

Look for attachments referencing the stuck pod's PV. If you see stale attachments:

bash
kubectl delete volumeattachment <attachment-name>

On managed Kubernetes services like DigitalOcean Kubernetes or EKS, you might also need to check the cloud provider's console to verify the volume isn't stuck in an "attaching" or "detaching" state at the infrastructure level.
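To narrow the list to attachments on the dead node specifically, a jq filter like this can help (assuming jq is installed; "<node-name>" stands in for your NotReady node):

```shell
# List VolumeAttachment objects pointing at a specific node (requires jq).
# "<node-name>" is a placeholder for the NotReady node's name.
kubectl get volumeattachments -o json \
  | jq -r --arg node "<node-name>" \
      '.items[] | select(.spec.nodeName == $node) | .metadata.name'
```

Anything this prints for a node that's permanently gone is a candidate for `kubectl delete volumeattachment`.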

Step 5: preStop Hooks That Hang

If your pod spec has a preStop hook that makes an HTTP call or runs a script, and that hook hangs, the pod won't proceed with shutdown until terminationGracePeriodSeconds expires.

Check your pod spec:

bash
kubectl get pod my-app-pod -o jsonpath='{.spec.containers[*].lifecycle}'

If you see a preStop hook making a call to a service that's down, that's your blocker. The fix:

  1. Short term: Force delete the pod with --grace-period=0 --force
  2. Long term: Add timeouts to your preStop hooks and make sure they're resilient to failures

Example of a safer preStop hook:

yaml
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - "curl -s --max-time 5 http://localhost:8080/shutdown || true"

The --max-time 5 and || true ensure the hook completes even if the endpoint is unreachable.

Step 6: Admission Webhooks Blocking Deletion

This one is rare but infuriating. If you have a ValidatingWebhookConfiguration or MutatingWebhookConfiguration that intercepts DELETE operations and the webhook server is down, every delete request will hang.

Check your webhooks:

bash
kubectl get validatingwebhookconfiguration
kubectl get mutatingwebhookconfiguration

Look for webhooks with failurePolicy: Fail that target pod resources. If the webhook service is down, either:

  1. Fix the webhook service
  2. Temporarily set failurePolicy: Ignore
  3. Delete the webhook configuration entirely if it's non-critical
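Option 2 can be done with a JSON patch. A sketch, where "my-webhook" is a hypothetical configuration name and `/webhooks/0` targets the first webhook entry (adjust the index for yours):

```shell
# Flip a webhook's failurePolicy to Ignore so delete requests can proceed.
# "my-webhook" is a hypothetical name; /webhooks/0 is the first entry.
kubectl patch validatingwebhookconfiguration my-webhook \
  --type='json' \
  -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'
```

Remember to flip it back to Fail once the webhook service is healthy, or you lose the policy enforcement it was there for.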

The Nuclear Option: Bulk Force-Delete

When you've got multiple pods stuck (say after a failed node or a bad deployment), delete them all at once:

bash
# Force delete all Terminating pods in a namespace
kubectl get pods -n my-namespace | grep Terminating | awk '{print $1}' | xargs kubectl delete pod -n my-namespace --grace-period=0 --force

For all namespaces:

bash
kubectl get pods --all-namespaces | grep Terminating | awk '{print "-n " $1 " " $2}' | xargs -L1 kubectl delete pod --grace-period=0 --force

Use this carefully in production. Force deletion skips graceful shutdown, which means in-flight requests get dropped and data might not be flushed.
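If you'd rather not parse column output (a pod whose name contains "Terminating" would match the grep), a sketch of a variant that keys off deletionTimestamp instead, assuming jq is installed:

```shell
# Bulk force-delete keyed off deletionTimestamp instead of column parsing,
# so pod names containing "Terminating" can't match by accident (requires jq).
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
           | select(.metadata.deletionTimestamp != null)
           | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read -r ns name; do
      kubectl delete pod "$name" -n "$ns" --grace-period=0 --force
    done
```

A non-null deletionTimestamp is exactly what the Terminating column reflects, so this selects the same pods without relying on display formatting.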

Preventing Stuck Terminating Pods

Once you've fixed the immediate issue, prevent it from happening again:

Set Reasonable terminationGracePeriodSeconds

The default is 30 seconds. For most apps, that's fine. For apps with long-running connections or cleanup tasks, increase it — but don't go overboard:

yaml
spec:
  terminationGracePeriodSeconds: 60

Handle SIGTERM in Your Application

Your app should catch SIGTERM and shut down cleanly — stop accepting new requests, drain existing connections, flush buffers, then exit. If your app ignores SIGTERM, it'll always wait for the full grace period before SIGKILL.
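For a container whose entrypoint is a shell script, a minimal sketch of that pattern looks like this (the echo stands in for your real drain/flush logic):

```shell
#!/bin/sh
# Minimal entrypoint sketch: trap SIGTERM and exit promptly instead of
# waiting out the full grace period. The echo is a stand-in for real
# drain/flush logic.
cleanup() {
  echo "draining connections..."
  exit 0
}
trap cleanup TERM

# Sleep in the background and wait on it so the trap fires immediately;
# a plain foreground sleep would delay signal handling until it finishes.
while true; do
  sleep 1 &
  wait $!
done
```

The `sleep 1 & wait $!` idiom matters: POSIX shells only run traps between commands, so waiting on a background sleep lets SIGTERM interrupt the loop right away.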

Avoid Unnecessary Finalizers

Only add finalizers when you genuinely need guaranteed cleanup. Every finalizer is a potential hang point if its controller misbehaves.

Monitor Node Health

Set up alerts for NotReady nodes. The faster you detect a dead node, the faster you can force-delete its orphaned pods. If you want to go deep on Kubernetes monitoring and observability, KodeKloud's CKA/CKAD courses cover this thoroughly — including how to set up proper node health checks and pod disruption budgets.

Use Pod Disruption Budgets

PDBs won't directly prevent stuck Terminating pods, but they ensure that voluntary disruptions (like node drains) don't kill too many pods at once, reducing the blast radius when things go wrong:

yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app

Quick Reference Cheat Sheet

Symptom                 | Likely Cause               | Fix
Pod has finalizers      | Controller not cleaning up | kubectl patch pod <name> -p '{"metadata":{"finalizers":null}}'
Node is NotReady        | Kubelet unreachable        | Force delete or delete the node
Volume errors in events | Stuck volume detach        | Delete stale volumeattachment objects
preStop hook timeout    | Hook target unreachable    | Force delete; add timeouts to hooks
All deletes hanging     | Broken admission webhook   | Fix or remove the webhook

Wrapping Up

Pods stuck in Terminating almost always come down to one of five things: finalizers, dead nodes, stuck volumes, hanging preStop hooks, or broken webhooks. The diagnosis path is always the same — check kubectl describe, look at finalizers, check node status, and look at events.

Force deletion with --grace-period=0 --force is your escape hatch, but understanding the root cause is what separates firefighting from actually fixing the problem. Get comfortable reading pod YAML and you'll resolve these in under a minute every time.

If you're preparing for the CKA or CKAD exam, troubleshooting stuck pods is a common scenario. KodeKloud's hands-on labs are the best way to practice these exact situations in a real cluster environment. And if you need a managed Kubernetes cluster to experiment with, DigitalOcean's DOKS spins up in minutes and is perfect for learning without the AWS complexity.
