Kubernetes ConfigMap and Secret Not Mounting in Pod — Every Fix (2026)
Your pod is running but the ConfigMap or Secret isn't showing up correctly — files are missing, values are stale, keys are wrong, or the pod refuses to start because the mount failed. Here's every cause and the exact fix.
How ConfigMap/Secret Mounting Works
Kubernetes mounts ConfigMaps and Secrets in two ways:
- Environment variables — values injected into the container's environment
- Volume mounts — files written to a directory inside the container
Both have different failure modes.
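As a minimal sketch (the pod, image, and ConfigMap names here are hypothetical), the two consumption styles look like this side by side in a pod spec:

```yaml
# Hypothetical pod consuming the same ConfigMap both ways
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: app-config        # every key becomes an env var
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config    # every key becomes a file here
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```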
Error 1: Pod Stuck in CreateContainerConfigError
Symptom:
```bash
kubectl get pods
# NAME    READY   STATUS                       RESTARTS
# myapp   0/1     CreateContainerConfigError   0

kubectl describe pod myapp
# Events:
#   Error: couldn't find key DB_HOST in ConfigMap default/app-config
```
Cause: You referenced a key in the ConfigMap that doesn't exist.
```yaml
# BAD — key name mismatch
envFrom:
  - configMapRef:
      name: app-config
env:
  - name: DB_HOST
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: DB_HOST   # ← this key must exist in the ConfigMap
```
Check what keys actually exist:
```bash
kubectl get configmap app-config -o jsonpath='{.data}' | jq keys
```
Fix: Either add the missing key to the ConfigMap or correct the key name in the pod spec.
```bash
# Add the missing key
kubectl patch configmap app-config \
  --type merge \
  -p '{"data": {"DB_HOST": "postgres.default.svc.cluster.local"}}'
```
Error 2: Secret Not Found — Pod Won't Start
Symptom:
```
Error: secret "app-secrets" not found
```
Cause: The Secret doesn't exist in the same namespace as the pod. Secrets are namespace-scoped.
```bash
# Check which namespace your pod is in
kubectl get pod myapp -o jsonpath='{.metadata.namespace}'

# Check if the secret exists in THAT namespace
kubectl get secret app-secrets -n your-namespace
```
Fix — Create the secret in the correct namespace:
```bash
kubectl create secret generic app-secrets \
  --namespace your-namespace \
  --from-literal=DB_PASSWORD=secretpassword \
  --from-literal=API_KEY=myapikey
```
Common gotcha: You created the secret in the default namespace but the pod runs in the production namespace.
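When a secret "exists" but still doesn't work, it's worth checking what it actually contains. Secret values in `kubectl get secret -o yaml` are base64-encoded, so decode before comparing. The round trip looks like this (the value is hypothetical; on a real cluster you'd feed `kubectl get secret app-secrets -o jsonpath='{.data.DB_PASSWORD}'` into `base64 -d`):

```shell
# Encode, as Kubernetes stores it:
encoded=$(printf '%s' 'secretpassword' | base64)
echo "$encoded"                          # c2VjcmV0cGFzc3dvcmQ=

# Decode, as you would when inspecting a live Secret:
printf '%s' "$encoded" | base64 -d; echo # secretpassword
```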
Error 3: ConfigMap Updated But Pod Still Shows Old Values
Symptom: You updated the ConfigMap with kubectl apply but the pod environment variables still show the old values.
Cause: Environment variables injected via envFrom or env.valueFrom are baked in at pod start time. They do NOT update when the ConfigMap changes.
```bash
# You updated the configmap
kubectl patch configmap app-config \
  --type merge \
  -p '{"data": {"LOG_LEVEL": "debug"}}'

# But existing pods still see the old value
kubectl exec myapp -- env | grep LOG_LEVEL
# LOG_LEVEL=info ← still the old value
```
Fix: Restart the pod to pick up new values:
```bash
kubectl rollout restart deployment/myapp
```
For volume-mounted ConfigMaps: Kubernetes syncs volume-mounted files automatically, typically within a minute or two (controlled by the kubelet sync period and cache propagation). You don't need to restart for volume mounts — but an in-process cache in your app may still hold stale values.
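If you want config changes to roll pods automatically instead of remembering to restart, a common pattern (popularized by Helm charts; the annotation name and hash value below are illustrative) is to embed a hash or version of the config in the pod template's annotations, since any pod-template change triggers a rolling update:

```yaml
# Fragment of a Deployment spec
spec:
  template:
    metadata:
      annotations:
        # Bump this whenever app-config changes. Helm users typically template
        # in a sha256 of the ConfigMap contents here; the value itself is a
        # hypothetical placeholder.
        checksum/config: "7f3a2b..."
```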
Error 4: Volume-Mounted ConfigMap Shows Wrong Files
Symptom: You mount a ConfigMap as a volume but only some files appear, or the wrong content is there.
```yaml
volumes:
  - name: config-volume
    configMap:
      name: app-config
      items:               # ← only specified items are mounted
        - key: nginx.conf
          path: nginx.conf
```
If you use items, only the specified keys are mounted. Omit items to mount all keys as files.
```yaml
# Mount ALL keys as files
volumes:
  - name: config-volume
    configMap:
      name: app-config   # no 'items' — all keys become files
```
Check what files are actually mounted:
```bash
kubectl exec myapp -- ls /etc/config/
kubectl exec myapp -- cat /etc/config/nginx.conf
```
Error 5: Secret Volume Files Have Wrong Permissions
Symptom: App can't read the mounted secret file:
```
permission denied: /etc/secrets/tls.key
```
Cause: By default, Kubernetes mounts secret files with permissions 0644. Some consumers (SSH keys, TLS private keys) require 0600 or stricter.
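A related gotcha when setting modes: YAML accepts octal literals like 0400, but JSON (for example the body of a kubectl patch, or manifests rendered as JSON) has no octal notation, so modes there must be written in decimal (defaultMode: 256 for 0400, 420 for the 0644 default). The conversion is easy to sanity-check locally:

```shell
# printf treats a leading 0 as a C-style octal constant
printf '%d\n' 0400   # 256 — what you'd write for defaultMode in a JSON patch
printf '%d\n' 0644   # 420 — the Kubernetes default mode in decimal
```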
Fix — Set defaultMode on the volume:
```yaml
volumes:
  - name: tls-certs
    secret:
      secretName: tls-secret
      defaultMode: 0400   # owner read-only
```
Or set per-file permissions:
```yaml
volumes:
  - name: tls-certs
    secret:
      secretName: tls-secret
      items:
        - key: tls.crt
          path: tls.crt
          mode: 0644
        - key: tls.key
          path: tls.key
          mode: 0400
```
Error 6: Mounting Without subPath Overwrites the Entire Directory
Symptom: You mount a single ConfigMap key into an existing directory and all the other files in that directory disappear.
Cause: Mounting a volume at a directory path replaces that entire directory with the volume's contents. With subPath, only the specific file is overlaid and the rest of the directory survives — but there's a gotcha.
```yaml
# WITHOUT subPath — replaces /etc/nginx/ entirely
volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/

# WITH subPath — only mounts nginx.conf, leaves the rest of /etc/nginx/ intact
volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
```
Important subPath limitation: when using subPath, the file is not automatically updated when the ConfigMap changes, even for volume mounts. You must restart the pod.
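Putting the WITH-subPath variant into a complete pod spec (the image tag and ConfigMap name here are assumptions), this sketch overlays just nginx.conf while keeping the image's other files under /etc/nginx/:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf   # overlay a single file
          subPath: nginx.conf                # key inside the ConfigMap
  volumes:
    - name: nginx-config
      configMap:
        name: nginx-config                   # hypothetical ConfigMap name
```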
Error 7: ConfigMap Too Large (1MB Limit)
Symptom:
```
Request entity too large: limit is 3145728
```
or the create/apply fails with a validation error saying the data must be at most 1048576 bytes.
Cause: Kubernetes caps each ConfigMap and Secret at 1 MiB (1,048,576 bytes). The "limit is 3145728" variant comes from the API server's 3 MiB request-size limit, which an oversized manifest can hit first.
Fix: For large files (binary assets, large configs), use a PersistentVolume instead, or split the ConfigMap:
# Check ConfigMap size
```bash
# Check ConfigMap size
kubectl get configmap my-config -o json | wc -c
```
For binary content in a ConfigMap, use binaryData instead of data (base64-encoded, and still counted against the 1 MiB limit).
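You can check whether a config file will fit before ever touching the cluster. This sketch generates a dummy oversized file and compares it against the 1 MiB (1,048,576-byte) limit; substitute your real config file for the generated one:

```shell
# Simulate an oversized config file (2,000,000 bytes of zeros)
head -c 2000000 /dev/zero > /tmp/big.conf

# Compare its byte count against the ConfigMap limit
size=$(wc -c < /tmp/big.conf | tr -d ' ')
if [ "$size" -gt 1048576 ]; then
  echo "too large: ${size} bytes exceeds the 1 MiB ConfigMap limit"
fi
```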
Error 8: optional: true Silently Skips Missing Keys
Symptom: Your pod starts fine but an expected environment variable is missing, and you're not sure why.
```yaml
env:
  - name: FEATURE_FLAG
    valueFrom:
      configMapKeyRef:
        name: feature-flags
        key: ENABLE_DARK_MODE
        optional: true   # ← pod starts even if ConfigMap/key doesn't exist
```
optional: true means that if the ConfigMap or key doesn't exist, the env var is simply left unset instead of blocking the pod. Check whether the ConfigMap actually has the key:
```bash
kubectl get configmap feature-flags -o yaml
```
Debugging Checklist
```bash
# 1. Check pod events
kubectl describe pod <pod-name> | grep -A 20 Events

# 2. Verify the ConfigMap/Secret exists in the correct namespace
kubectl get configmap <name> -n <namespace>
kubectl get secret <name> -n <namespace>

# 3. Check the exact keys in the ConfigMap
kubectl get configmap <name> -o jsonpath='{.data}' | jq .

# 4. Verify what's actually mounted in the running pod
kubectl exec <pod-name> -- env | grep -i <your-var>
kubectl exec <pod-name> -- ls /path/to/mount/
kubectl exec <pod-name> -- cat /path/to/mount/<file>

# 5. Check the pod spec for typos
kubectl get pod <pod-name> -o yaml | grep -A 10 configMapKeyRef
kubectl get pod <pod-name> -o yaml | grep -A 10 secretKeyRef
```

| Error | Cause | Fix |
|---|---|---|
| CreateContainerConfigError | Missing key in ConfigMap/Secret | Add key or fix name |
| Secret not found | Wrong namespace | Create in correct namespace |
| Stale env vars | Env vars baked at pod start | kubectl rollout restart |
| Missing files in volume | items filtering | Remove items or add missing key |
| Permission denied | Wrong file mode | Set defaultMode: 0400 |
| Files missing after volume mount | Volume replaced directory | Use subPath for single files |
| Empty optional env var | ConfigMap/key doesn't exist | Check ConfigMap exists |
ConfigMap and Secret issues are almost always a namespace mismatch, a key name typo, or forgetting to restart the pod after an update.