
Helm Upgrade Failed: 'has no deployed releases' — How to Fix in 2026

Fix the common Helm error 'has no deployed releases' that blocks upgrades. Step-by-step diagnosis and 4 proven solutions including history cleanup and force replacement.

DevOpsBoys · Mar 24, 2026 · 5 min read

You run helm upgrade on a release that you know exists, and Helm slaps you with this:

```bash
Error: UPGRADE FAILED: "my-app" has no deployed releases
```

The release is there. You can see the pods running. But Helm refuses to touch it. Here is exactly why this happens and how to fix it.

Why This Error Happens

Helm tracks every release as a Kubernetes Secret (or ConfigMap in older setups). Each revision has a status field. The valid statuses are:

| Status | Meaning |
| --- | --- |
| `deployed` | Current, active release |
| `superseded` | Previous successful release |
| `failed` | Release that failed to install or upgrade |
| `pending-install` | Install started but never completed |
| `pending-upgrade` | Upgrade started but never completed |
| `pending-rollback` | Rollback started but never completed |
| `uninstalling` | Uninstall in progress |

When you run helm upgrade, Helm looks for a revision with status deployed. If every revision is failed, pending-install, or pending-upgrade, Helm cannot find a base to upgrade from — and throws this error.

The most common causes:

  1. First install failed — helm install timed out or crashed, leaving the release in pending-install
  2. Every upgrade failed — multiple failed upgrades with no successful one in between
  3. Interrupted operations — a CI/CD pipeline was killed mid-deploy, leaving a pending-upgrade
  4. Helm 2 to 3 migration artifacts — leftover state from incomplete migrations
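You can reproduce Helm's lookup by hand. The sketch below runs the check against a hard-coded sample of what `helm history my-app -o json` returns (the real command needs a cluster, so the sample stands in for it): if no revision has status `deployed`, the upgrade is doomed to exactly this error.

```bash
# Illustrative sample of `helm history my-app -n my-namespace -o json`;
# in a real cluster, pipe the command's output instead
history='[
  {"revision": 1, "status": "failed"},
  {"revision": 2, "status": "pending-upgrade"}
]'

# helm upgrade needs at least one revision with status "deployed";
# count how many this history contains (grep -c exits 1 on zero
# matches, so || true keeps strict shells happy)
deployed=$(printf '%s\n' "$history" | grep -c '"status": "deployed"' || true)
echo "deployed revisions: $deployed"
# With the sample above this prints: deployed revisions: 0
```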

Step 1: Check the Release History

```bash
helm history my-app -n my-namespace
```

You will see something like:

```
REVISION  STATUS           CHART         DESCRIPTION
1         pending-install  my-app-1.0.0  Initial install underway
```

Or:

```
REVISION  STATUS  CHART         DESCRIPTION
1         failed  my-app-1.0.0  Release "my-app" failed: timed out waiting for the condition
2         failed  my-app-1.1.0  Upgrade "my-app" failed
3         failed  my-app-1.2.0  Upgrade "my-app" failed
```

No revision says deployed. That is the problem.

If helm history returns nothing, the release secrets may have been deleted. Check directly:

```bash
kubectl get secrets -n my-namespace -l owner=helm,name=my-app
```

Step 2: Check What Is Actually Running

Before you fix anything, verify the current state of your workloads:

```bash
kubectl get all -n my-namespace -l app.kubernetes.io/instance=my-app
```

This tells you whether pods are running (and should be preserved) or whether the release is truly broken.

Fix 1: Rollback to a Working Revision

If helm history shows at least one superseded (previously successful) revision buried under the failed ones:

```bash
helm rollback my-app 1 -n my-namespace
```

Rollback creates a new revision that is a copy of revision 1 and marks it deployed, and then helm upgrade will work again.
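When the history is long, you can pick the newest healthy revision automatically. This sketch parses a simplified two-column sample; against a real cluster you would derive the pairs from `helm history -o json`, because the table output's UPDATED column contains spaces and breaks naive column parsing.

```bash
# Simplified REVISION/STATUS pairs (illustrative; build them from
# `helm history my-app -n my-namespace -o json` in practice)
history='1 superseded
2 deployed
3 failed
4 failed'

# Keep the highest revision whose status is deployed or superseded
good_rev=$(printf '%s\n' "$history" \
  | awk '$2 == "deployed" || $2 == "superseded" { rev = $1 } END { print rev }')
echo "rollback target: $good_rev"
# Prints: rollback target: 2
# Then: helm rollback my-app "$good_rev" -n my-namespace
```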

Fix 2: Uninstall and Reinstall (Clean Slate)

If there is no good revision to roll back to, or if the release never successfully installed:

```bash
# Remove the broken release
helm uninstall my-app -n my-namespace

# Reinstall
helm install my-app ./my-chart -n my-namespace -f values.yaml
```

If helm uninstall itself fails, force-remove the release secrets:

```bash
kubectl delete secrets -n my-namespace -l owner=helm,name=my-app
```

Then run helm install fresh.

Warning: This deletes and recreates every resource the chart manages. Data on PersistentVolumes whose reclaim policy is Retain survives the deletion, and helm uninstall leaves PVCs created by StatefulSet volumeClaimTemplates in place, but anything else is gone. Back up first.
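Before wiping the release, snapshot what Helm knows about it so the reinstall matches the old configuration. The function below is a sketch: the release and namespace names are placeholders, and the helm binary is passed as an argument so the whole sequence can be dry-run with an `echo` stub before pointing it at a real cluster.

```bash
# Back up values and manifests, then uninstall and reinstall.
# $1 = helm binary (pass `echo` to dry-run), $2 = release, $3 = namespace.
reinstall_release() {
  local helm=${1:-helm} release=${2:-my-app} ns=${3:-my-namespace}
  # 1. Save the user-supplied values and the rendered manifests
  "$helm" get values "$release" -n "$ns" > "$release.values.yaml"
  "$helm" get manifest "$release" -n "$ns" > "$release.manifest.yaml"
  # 2. Remove the broken release
  "$helm" uninstall "$release" -n "$ns"
  # 3. Reinstall from the saved values
  "$helm" install "$release" ./my-chart -n "$ns" -f "$release.values.yaml"
}

# Dry-run the sequence with an echo stub instead of the real helm binary:
reinstall_release echo my-app my-namespace
```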

Fix 3: Force Upgrade with --install

The --install flag tells Helm to install the release if it does not exist, and --force makes Helm replace resources outright (an update via replacement rather than a patch):

```bash
helm upgrade --install my-app ./my-chart \
  -n my-namespace \
  -f values.yaml \
  --force
```

This works in many cases, but if the release is stuck in pending-install, you may still need to clean up the secrets first (see Fix 2).
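That fallback can be scripted: attempt the upgrade, and only if it fails delete any release Secret still labelled with a pending status before retrying. Everything here is a sketch with placeholder names; the helm and kubectl binaries are parameterized so the flow can be exercised with stubs before touching a cluster.

```bash
# Try upgrade --install first; on failure, clear secrets stuck in a pending
# state and retry. $1 = helm binary, $2 = kubectl binary (both stubbable).
upgrade_with_cleanup() {
  local helm=${1:-helm} kubectl=${2:-kubectl}
  if ! "$helm" upgrade --install my-app ./my-chart \
      -n my-namespace -f values.yaml --force
  then
    # Helm 3 labels each release Secret with its status, so pending
    # revisions can be targeted directly
    "$kubectl" delete secrets -n my-namespace \
      -l owner=helm,name=my-app,status=pending-install
    "$helm" upgrade --install my-app ./my-chart \
      -n my-namespace -f values.yaml --force
  fi
}
```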

Fix 4: Manual Secret Surgery

For cases where you cannot uninstall (production release with external dependencies, PVCs you cannot recreate), you can manually patch the release secret:

```bash
# Find the latest release secret
kubectl get secrets -n my-namespace -l owner=helm,name=my-app \
  --sort-by=.metadata.creationTimestamp

# The secret name follows the pattern: sh.helm.release.v1.my-app.v<revision>
# Decode it: once for the Secret data-field encoding, once for Helm's own
# base64 layer, then gunzip to get the release JSON
kubectl get secret sh.helm.release.v1.my-app.v3 -n my-namespace \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d > release.json
```

Edit release.json and change the status field from failed or pending-install to deployed:

```bash
# Find and replace the status
sed -i 's/"status":"failed"/"status":"deployed"/' release.json
sed -i 's/"status":"pending-install"/"status":"deployed"/' release.json
```

Re-encode and patch:

```bash
# Re-encode in reverse order: gzip, Helm's base64 layer, then the Secret
# data-field layer. -w0 stops GNU base64 from wrapping lines, which would
# corrupt the patch (macOS base64 does not wrap and takes no -w flag).
gzip -c release.json | base64 -w0 | base64 -w0 > encoded.txt

kubectl patch secret sh.helm.release.v1.my-app.v3 -n my-namespace \
  --type='json' \
  -p="[{\"op\":\"replace\",\"path\":\"/data/release\",\"value\":\"$(cat encoded.txt)\"}]"
```

Now helm upgrade will find a deployed revision and proceed normally.

Use this only as a last resort. It is fragile and version-specific.

Preventing This in CI/CD

Most teams hit this error because their CI/CD pipeline was interrupted mid-deploy. Add these flags to your Helm commands:

```bash
helm upgrade --install my-app ./my-chart \
  -n my-namespace \
  -f values.yaml \
  --atomic \
  --timeout 10m \
  --wait
```

The key flags:

  • --atomic: If the upgrade fails, Helm automatically rolls back to the previous release. This ensures you always have a deployed revision.
  • --timeout: Sets a deadline so Helm does not hang forever.
  • --wait: Waits for pods to be ready before marking the release as deployed.

With --atomic, even if your pipeline is killed, Helm will have either completed the upgrade or rolled back cleanly.
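A cheap pre-deploy guard makes the failure mode explicit instead of letting the pipeline die mid-upgrade: read the release status (`helm status my-app -o json` reports it under `.info.status`) and refuse to deploy over a stuck release. The function below only inspects the status string, so it is shown with literal inputs; wiring it to a live cluster is sketched in the trailing comment.

```bash
# Fail fast when the release is stuck in a pending state instead of letting
# `helm upgrade` die with "has no deployed releases" mid-pipeline
check_release_status() {
  case "$1" in
    pending-install|pending-upgrade|pending-rollback|uninstalling)
      echo "release stuck in '$1': clean up before deploying"
      return 1
      ;;
    *)
      echo "release status '$1': safe to deploy"
      ;;
  esac
}

check_release_status deployed
# In a pipeline, feed it the live status, e.g.:
#   check_release_status "$(helm status my-app -n my-namespace -o json \
#     | python3 -c 'import json,sys; print(json.load(sys.stdin)["info"]["status"])')"
```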

Handling This in ArgoCD

If you use ArgoCD for GitOps deployments, the same issue can occur when syncs fail repeatedly. ArgoCD has its own retry logic, but the underlying Helm state can still get corrupted.

To fix it in ArgoCD:

```bash
# Check the Helm state
kubectl get secrets -n my-namespace -l owner=helm,name=my-app

# If stuck, delete the failed secrets and let ArgoCD resync
kubectl delete secrets -n my-namespace -l owner=helm,name=my-app,status=pending-install

# Trigger a hard refresh in ArgoCD
argocd app get my-app --hard-refresh
argocd app sync my-app
```

If you are managing complex Helm deployments and want to deepen your Kubernetes and Helm skills, the hands-on labs at KodeKloud cover Helm chart development, debugging, and production patterns extensively.

Quick Reference Table

| Scenario | Fix | Risk level |
| --- | --- | --- |
| History shows a `superseded` revision | `helm rollback my-app <rev>` | Low |
| First install failed (`pending-install`) | `helm uninstall`, then `helm install` | Medium |
| Multiple failed upgrades, no data to preserve | `helm uninstall`, then `helm install` | Medium |
| Production release that cannot be deleted | Manual secret surgery (Fix 4) | High |
| CI/CD keeps causing this | Add `--atomic --wait --timeout` | Prevention |

Wrapping Up

The "has no deployed releases" error is Helm telling you that its internal state has no successful revision to build on. The fix depends on whether you can afford to delete and recreate the release or need to preserve the existing state.

For most development and staging environments, helm uninstall followed by helm install is the fastest path. For production, rollback first, and only resort to secret surgery if nothing else works.

The real fix is prevention — always use --atomic in CI/CD pipelines. It is one flag that saves hours of debugging.

For hands-on practice with Helm, Kubernetes troubleshooting, and GitOps workflows, check out KodeKloud's DevOps learning paths. If you need a managed Kubernetes cluster to test on, DigitalOcean's DOKS gives you a production-ready cluster in minutes.
