Helm Values Not Updating After helm upgrade — How to Fix It (2026)
Your helm upgrade ran successfully but nothing changed in the cluster. Here's every reason this happens and how to fix each one.
You run helm upgrade myapp ./chart --set image.tag=v2.0.0 — it says "Release upgraded successfully." But the pods are still running v1.0.0. Nothing changed.
This is maddening. Here's why it happens and how to fix it.
Why This Happens
There are five main reasons your Helm values don't take effect:
- The deployment didn't detect a change (no pod restart triggered)
- You're passing values the wrong way and they're being overridden
- The Helm chart has hard-coded values that ignore your input
- The pod pulled the old image from cache
- The release is stuck in a bad state
Let's go through each one.
Case 1: No Change Detected — Pod Didn't Restart
Kubernetes only restarts pods when the Deployment's pod template (spec.template) changes. If you changed a ConfigMap value but not image.tag, Kubernetes has no reason to restart anything.
Check whether the pods actually restarted:

```shell
kubectl rollout status deployment/myapp
kubectl describe pod myapp-xxx | grep Image
```

The fix is to force a rollout:
```shell
# Force a restart without changing anything else
kubectl rollout restart deployment/myapp

# Or set an annotation value that changes on every upgrade
helm upgrade myapp ./chart \
  --set "podAnnotations.restartedAt=$(date +%s)"
```

A better fix is to add this to your Helm chart's deployment template:
```yaml
# templates/deployment.yaml
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

Now every time the ConfigMap changes, the pod annotation changes, triggering a restart automatically.
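The mechanism behind this trick is easy to see in plain shell: hashing the ConfigMap content produces a different annotation value whenever the content changes, and any change to the pod template triggers a rollout. The example values below are illustrative:

```shell
# Hash two versions of a ConfigMap's data. Any edit changes the digest,
# which changes the pod annotation, which forces Kubernetes to roll the pods.
printf 'logLevel: info\n' | sha256sum
printf 'logLevel: debug\n' | sha256sum
# The two digests differ, so the rendered Deployment spec differs too.
```

If the config is byte-for-byte identical, the digest (and therefore the pod template) stays the same, which is exactly why unchanged upgrades don't restart anything.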
Case 2: Values Getting Overridden
If you're using multiple --values files, later files override earlier ones. If you're mixing --values and --set, they can conflict.
Check what values Helm actually used:
# See the values from the last release
helm get values myapp
# See ALL values (including defaults)
helm get values myapp --allThe values precedence (highest to lowest):
--setflags (highest priority)--set-stringflags--values/-ffiles (last file wins)- Chart's
values.yamldefaults (lowest priority)
Common mistake:

```shell
# Later -f files override earlier ones for any overlapping keys,
# and --set overrides both files
helm upgrade myapp ./chart \
  -f values-base.yaml \
  -f values-prod.yaml \
  --set image.tag=v2.0.0

# If values-prod.yaml has image.tag: v1.5.0,
# --set wins and you get v2.0.0.
# But if values-base.yaml has image.repository: oldregistry/myapp
# and values-prod.yaml has image.repository: myregistry/myapp,
# values-prod.yaml wins for repository.
```

Debug it:
```shell
# Dry-run and inspect the rendered manifests
helm upgrade myapp ./chart --set image.tag=v2.0.0 --dry-run --debug | grep image:
```

Case 3: Hard-Coded Values in the Chart
Someone wrote a chart with values hard-coded in the template instead of using {{ .Values.xxx }}.
Check the rendered templates:

```shell
helm template myapp ./chart --set image.tag=v2.0.0 | grep image:
```

If you still see image: myapp:v1.0.0 after setting image.tag=v2.0.0, the chart template isn't using your value.
Find the hard-coded value:

```shell
grep -r "v1.0.0" ./chart/templates/
```

Fix the template:
```yaml
# Before (broken)
image: myapp:v1.0.0

# After (correct)
image: {{ .Values.image.repository }}:{{ .Values.image.tag | default "latest" }}
```

Case 4: imagePullPolicy is Not Always
Kubernetes caches images on each node. If the tag hasn't changed (e.g., you're reusing latest or the same tag), the node won't pull the new image.

Check the current pull policy:

```shell
kubectl get pod myapp-xxx -o jsonpath='{.spec.containers[*].imagePullPolicy}'
```

Fix it in your Helm values:
```yaml
# values.yaml
image:
  repository: myregistry/myapp
  tag: "v2.0.0"
  pullPolicy: Always  # Force a re-pull on every restart
```

Better still, never use the latest tag in production. Use a unique tag per build (git SHA or build number):
```shell
helm upgrade myapp ./chart \
  --set image.tag=$(git rev-parse --short HEAD)
```

Case 5: Release Stuck in Pending State
If a previous upgrade failed halfway, Helm might lock the release.
Check the release status:

```shell
helm list -n default
# NAME   NAMESPACE  REVISION  UPDATED              STATUS           CHART
# myapp  default    3         2026-04-01 10:15:00  pending-upgrade  myapp-1.0.0
```

A pending-upgrade or pending-install status means Helm won't apply new changes.
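In automation you can detect a stuck release before attempting the upgrade. Here is a minimal sketch in plain shell; in a real script the status string would come from helm list -o json or helm status, so the function and its input below are illustrative:

```shell
# Decide whether a release needs a rollback before upgrading, based on
# its Helm status string. The pending-* states block further operations.
needs_rollback() {
  case "$1" in
    pending-install|pending-upgrade|pending-rollback) return 0 ;;
    *) return 1 ;;
  esac
}

if needs_rollback "pending-upgrade"; then
  echo "release is stuck; roll back before upgrading"
fi
```

A guard like this turns a silent no-op upgrade into an explicit, actionable failure in CI.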
Fix: rollback first, then upgrade:
# Rollback to last successful release
helm rollback myapp
# Check status
helm list -n default
# STATUS should now be "deployed"
# Now upgrade again
helm upgrade myapp ./chart --set image.tag=v2.0.0Quick Debugging Workflow
```shell
# 1. What did Helm actually receive?
helm get values myapp

# 2. What would the manifests look like?
helm template myapp ./chart -f your-values.yaml | grep -A5 "image:"

# 3. What does the Deployment in the cluster say?
kubectl describe deployment myapp | grep Image

# 4. Are the pods actually running the latest image?
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].image}'

# 5. Did the rollout happen?
kubectl rollout history deployment/myapp
```

Preventing This in CI/CD
In your pipeline, always verify the upgrade actually worked:
```yaml
# GitHub Actions
- name: Helm Upgrade
  run: |
    helm upgrade myapp ./chart \
      --set image.tag=${{ github.sha }} \
      --wait \
      --timeout 5m

- name: Verify Deployment
  run: |
    kubectl rollout status deployment/myapp --timeout=3m
    DEPLOYED_IMAGE=$(kubectl get deployment myapp -o jsonpath='{.spec.template.spec.containers[0].image}')
    echo "Deployed: $DEPLOYED_IMAGE"
    echo "Expected: myapp:${{ github.sha }}"
    [ "$DEPLOYED_IMAGE" = "myapp:${{ github.sha }}" ] || exit 1
```

The --wait flag makes Helm block until the rollout completes. If pods fail to start, the upgrade fails and you know immediately.
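The verification step can be factored into a reusable helper. A sketch in plain shell; the verify_image function name is illustrative, and in a real pipeline the first argument would come from the kubectl jsonpath query above:

```shell
# Compare the image Kubernetes reports against the tag CI just pushed,
# and fail loudly on a mismatch so the pipeline stops.
verify_image() {
  deployed="$1"
  expected="$2"
  if [ "$deployed" = "$expected" ]; then
    echo "image verified: $deployed"
    return 0
  fi
  echo "image mismatch: deployed=$deployed expected=$expected" >&2
  return 1
}

verify_image "myapp:abc1234" "myapp:abc1234"
# prints: image verified: myapp:abc1234
```

Returning a non-zero exit code (rather than just echoing) is what makes CI actually stop on a bad deploy.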
Resources
- Helm Documentation — Upgrade — Official reference for all upgrade flags
- KodeKloud Helm Course — Hands-on labs covering Helm chart debugging in Kubernetes clusters
- Artifact Hub — Find community Helm charts to learn from
Helm value issues are almost always one of these five things. Start with helm get values to see what Helm actually has, then helm template to see what the manifests will look like. The cluster state and Helm's view of the world are separate: always check both.