Helm Upgrade Failed: 'has no deployed releases' — How to Fix in 2026
Fix the common Helm error 'has no deployed releases' that blocks upgrades. Step-by-step diagnosis and 4 proven solutions including history cleanup and force replacement.
You run helm upgrade on a release that you know exists, and Helm slaps you with this:
```
Error: UPGRADE FAILED: "my-app" has no deployed releases
```

The release is there. You can see the pods running. But Helm refuses to touch it. Here is exactly why this happens and how to fix it.
Why This Error Happens
Helm tracks every release as a Kubernetes Secret (or ConfigMap in older setups). Each revision has a status field. The valid statuses are:
| Status | Meaning |
|---|---|
| deployed | Current active release |
| superseded | Previous successful release |
| failed | Release that failed to install or upgrade |
| pending-install | Install started but never completed |
| pending-upgrade | Upgrade started but never completed |
| pending-rollback | Rollback started but never completed |
| uninstalling | Uninstall in progress |
When you run helm upgrade, Helm looks for a revision with status deployed. If every revision is failed, pending-install, or pending-upgrade, Helm cannot find a base to upgrade from — and throws this error.
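You can see the status Helm has recorded without decoding anything, because Helm 3 also stamps it as a label on each release secret (the namespace and release name below are placeholders):

```shell
# List every revision of the release with the status Helm recorded for it
kubectl get secrets -n my-namespace -l owner=helm,name=my-app \
  -o custom-columns=NAME:.metadata.name,VERSION:.metadata.labels.version,STATUS:.metadata.labels.status
```

If no row shows STATUS deployed, you are looking at exactly the state that triggers this error.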
The most common causes:
- First install failed — helm install timed out or crashed, leaving the release in pending-install
- Every upgrade failed — multiple failed upgrades with no successful one in between
- Interrupted operations — a CI/CD pipeline was killed mid-deploy, leaving a pending-upgrade revision behind
- Helm 2 to 3 migration artifacts — leftover state from incomplete migrations
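Helm can surface these states directly: helm list filters by status, which makes it easy to spot stuck releases across the whole cluster before they block a deploy:

```shell
# Releases stuck in a pending state (pending-install/upgrade/rollback)
helm list -A --pending

# Releases whose latest revision failed
helm list -A --failed
```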
Step 1: Check the Release History
```shell
helm history my-app -n my-namespace
```

You will see something like:

```
REVISION  STATUS           CHART         DESCRIPTION
1         pending-install  my-app-1.0.0  Install complete
```
Or:
```
REVISION  STATUS  CHART         DESCRIPTION
1         failed  my-app-1.0.0  Install complete
2         failed  my-app-1.1.0  Upgrade "my-app" failed
3         failed  my-app-1.2.0  Upgrade "my-app" failed
```
No revision says deployed. That is the problem.
If helm history returns nothing, the release secrets may have been deleted. Check directly:
```shell
kubectl get secrets -n my-namespace -l owner=helm,name=my-app
```

Step 2: Check What Is Actually Running
Before you fix anything, verify the current state of your workloads:
```shell
kubectl get all -n my-namespace -l app.kubernetes.io/instance=my-app
```

This tells you whether pods are running (and should be preserved) or whether the release is truly broken.
Fix 1: Rollback to a Working Revision
If helm history shows at least one deployed or superseded revision buried under failed ones:
```shell
helm rollback my-app 1 -n my-namespace
```

This redeploys the contents of revision 1 as a new revision with deployed status, and then helm upgrade will work again.
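Note that rollback does not mutate revision 1 in place: it creates a new revision whose content is copied from revision 1, and that new revision carries the deployed status. The history afterwards looks something like this (output illustrative):

```shell
helm rollback my-app 1 -n my-namespace --wait
helm history my-app -n my-namespace
# REVISION  STATUS      DESCRIPTION
# 1         superseded  Install complete
# 2         failed      Upgrade "my-app" failed
# 3         failed      Upgrade "my-app" failed
# 4         deployed    Rollback to 1
```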
Fix 2: Uninstall and Reinstall (Clean Slate)
If there is no good revision to roll back to, or if the release never successfully installed:
```shell
# Remove the broken release
helm uninstall my-app -n my-namespace

# Reinstall
helm install my-app ./my-chart -n my-namespace -f values.yaml
```

If helm uninstall itself fails, force-remove the release secrets:

```shell
kubectl delete secrets -n my-namespace -l owner=helm,name=my-app
```

Then run helm install fresh.
Warning: This deletes and recreates every resource managed by the chart. If your PersistentVolumes use the Retain reclaim policy, the underlying data survives deletion; otherwise, back up first.
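Before wiping anything, snapshot what the release was deploying. Assuming Helm can still read the (failed) release, these commands capture its values and rendered manifests so you can compare after reinstalling:

```shell
# Back up the release configuration before uninstalling
helm get values my-app -n my-namespace > backup-values.yaml
helm get manifest my-app -n my-namespace > backup-manifest.yaml
```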
Fix 3: Force Upgrade with --install
The --install flag tells Helm to install if the release does not exist, and combined with --force, it replaces resources:
```shell
helm upgrade --install my-app ./my-chart \
  -n my-namespace \
  -f values.yaml \
  --force
```

This works in many cases, but if the release is stuck in pending-install, you may still need to clean up the secrets first (see Fix 2).
Fix 4: Manual Secret Surgery
For cases where you cannot uninstall (production release with external dependencies, PVCs you cannot recreate), you can manually patch the release secret:
```shell
# Find the latest release secret
kubectl get secrets -n my-namespace -l owner=helm,name=my-app \
  --sort-by=.metadata.creationTimestamp

# The secret name follows the pattern: sh.helm.release.v1.my-app.v<revision>
# Decode it (base64 twice: once for the Secret, once for Helm's own encoding)
kubectl get secret sh.helm.release.v1.my-app.v3 -n my-namespace \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d > release.json
```

Edit release.json and change the status field from failed or pending-install to deployed:
```shell
# Find and replace the status
sed -i 's/"status":"failed"/"status":"deployed"/' release.json
sed -i 's/"status":"pending-install"/"status":"deployed"/' release.json
```

Re-encode and patch:
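A blanket sed can corrupt the payload if the same status string happens to appear elsewhere in the JSON. Assuming jq is available, this edits only the release's own status field:

```shell
# Inspect the currently recorded status
jq -r '.info.status' release.json

# Change only the release status field, leaving everything else untouched
jq '.info.status = "deployed"' release.json > patched.json && mv patched.json release.json
```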
```shell
# -w0 disables base64 line wrapping (GNU coreutils); wrapped output would break the patch
cat release.json | gzip | base64 -w0 | base64 -w0 > encoded.txt
kubectl patch secret sh.helm.release.v1.my-app.v3 -n my-namespace \
  --type='json' \
  -p="[{\"op\":\"replace\",\"path\":\"/data/release\",\"value\":\"$(cat encoded.txt)\"}]"
```

Now helm upgrade will find a deployed revision and proceed normally.
Use this only as a last resort. It is fragile and version-specific.
Preventing This in CI/CD
Most teams hit this error because their CI/CD pipeline was interrupted mid-deploy. Add these flags to your Helm commands:
```shell
helm upgrade --install my-app ./my-chart \
  -n my-namespace \
  -f values.yaml \
  --atomic \
  --timeout 10m \
  --wait
```

The key flags:
- --atomic: If the upgrade fails, Helm automatically rolls back to the previous release. This ensures you always have a deployed revision.
- --timeout: Sets a deadline so Helm does not hang forever.
- --wait: Waits for pods to be ready before marking the release as deployed. (--atomic already implies --wait, so listing both is belt-and-braces.)
With --atomic, even if your pipeline is killed, Helm will have either completed the upgrade or rolled back cleanly.
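A pipeline can also fail fast when it detects an already-stuck release, instead of letting the upgrade die with this confusing error. A sketch, assuming jq is installed and the release and namespace names are placeholders:

```shell
#!/usr/bin/env sh
# Abort early if the release is stuck in a pending state
STATUS=$(helm status my-app -n my-namespace -o json 2>/dev/null | jq -r '.info.status // empty')
case "$STATUS" in
  pending-install|pending-upgrade|pending-rollback)
    echo "Release my-app is stuck in '$STATUS'; clean it up before deploying" >&2
    exit 1
    ;;
esac

helm upgrade --install my-app ./my-chart \
  -n my-namespace \
  -f values.yaml \
  --atomic --timeout 10m
```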
Handling This in ArgoCD
If you use ArgoCD for GitOps deployments, the same issue can occur when syncs fail repeatedly. ArgoCD has its own retry logic, but the underlying Helm state can still get corrupted.
To fix it in ArgoCD:
```shell
# Check the Helm state
kubectl get secrets -n my-namespace -l owner=helm,name=my-app

# If stuck, delete the failed secrets and let ArgoCD resync
kubectl delete secrets -n my-namespace -l owner=helm,name=my-app,status=pending-install

# Trigger a hard refresh in ArgoCD
argocd app get my-app --hard-refresh
argocd app sync my-app
```

If you are managing complex Helm deployments and want to deepen your Kubernetes and Helm skills, the hands-on labs at KodeKloud cover Helm chart development, debugging, and production patterns extensively.
Quick Reference Table
| Scenario | Fix | Risk Level |
|---|---|---|
| History shows a superseded revision | helm rollback my-app <rev> | Low |
| First install failed (pending-install) | helm uninstall then helm install | Medium |
| Multiple failed upgrades, no data to preserve | helm uninstall then helm install | Medium |
| Production release, cannot delete | Manual secret surgery (Fix 4) | High |
| CI/CD keeps causing this | Add --atomic --wait --timeout | Prevention |
Wrapping Up
The "has no deployed releases" error is Helm telling you that its internal state has no successful revision to build on. The fix depends on whether you can afford to delete and recreate the release or need to preserve the existing state.
For most development and staging environments, helm uninstall followed by helm install is the fastest path. For production, rollback first, and only resort to secret surgery if nothing else works.
The real fix is prevention — always use --atomic in CI/CD pipelines. It is one flag that saves hours of debugging.
For hands-on practice with Helm, Kubernetes troubleshooting, and GitOps workflows, check out KodeKloud's DevOps learning paths. If you need a managed Kubernetes cluster to test on, DigitalOcean's DOKS gives you a production-ready cluster in minutes.