Nginx Ingress 502 Bad Gateway — How to Fix It (2026)
Getting 502 Bad Gateway from your Nginx Ingress Controller? Here's every cause and the exact fix for each one.
Your pods are running. Your Service exists. But every request through the Ingress returns 502 Bad Gateway. Here's how to find and fix the root cause fast.
What 502 Means
502 = Nginx received your request but got an invalid response, or no response at all, from the upstream pod.
- 502 → upstream gave a bad response (pod issue)
- 503 → no healthy upstream (selector mismatch, no ready pods)
- 504 → upstream didn't respond in time (timeout)
Step 1: Check Pod Readiness
```bash
kubectl get pods -n your-namespace
kubectl describe pod <pod> -n your-namespace
```

Nginx only routes to pods that pass readiness probes. A pod can be Running but still not ready (0/1).
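If a readiness probe is missing or misconfigured, pods never report ready and Nginx drops them from the upstream pool. A minimal sketch of a probe for the container spec; the path, port, and timings are placeholder assumptions to adapt to your app:

```yaml
# Hypothetical readiness probe: path, port, and timings are assumptions
readinessProbe:
  httpGet:
    path: /health        # must return 2xx/3xx when the app can serve traffic
    port: 8080           # the containerPort the app listens on
  initialDelaySeconds: 5 # give the app time to boot
  periodSeconds: 10
  failureThreshold: 3    # 3 consecutive failures -> pod marked NotReady
```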
```bash
# Test readiness endpoint directly
kubectl exec -it <pod> -- curl -f http://localhost:8080/health
```

Step 2: Check Service Endpoints
```bash
kubectl get endpoints your-service -n your-namespace
```

If it shows `<none>`, your Service selector doesn't match any pods.
```bash
# Compare pod labels vs service selector
kubectl get pods -n your-namespace --show-labels
kubectl describe svc your-service -n your-namespace | grep Selector
```

Labels are case-sensitive. Fix the mismatch.
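A side-by-side sketch of the two label blocks that have to agree; all names here are assumed placeholders:

```yaml
# The pod template labels and the Service selector must match exactly
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app      # <- the pod label Nginx's Service will match on
    spec:
      containers:
        - name: app
          image: your-image
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  selector:
    app: your-app          # <- must equal the pod label above, case included
  ports:
    - port: 80
      targetPort: 8080
```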
Step 3: Verify the Port Chain
Ingress servicePort → Service port → containerPort → app listening port — every link must match.
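In manifest form, the first link of the chain looks like this; the names and ports are assumed placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-service
                port:
                  number: 80   # must equal the Service's .spec.ports[].port;
                               # the Service's targetPort must then equal the
                               # containerPort, which must be the port the
                               # app actually binds to
```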
```bash
kubectl describe ingress your-ingress -n your-namespace
kubectl describe svc your-service -n your-namespace | grep Port
kubectl exec -it <pod> -- ss -tlnp
```

Case: Keepalive Mismatch
Node.js and Go HTTP servers can close keepalive connections while Nginx still holds them.
```yaml
# Set in the ingress-nginx controller ConfigMap (this is a ConfigMap key,
# not a per-Ingress annotation); "0" disables upstream keepalive entirely
upstream-keepalive-connections: "0"
```

Or increase your app's keepalive timeout above Nginx's 75s default:
```js
// Node.js: keep sockets open longer than Nginx's keepalive
server.keepAliveTimeout = 120000; // 120s
server.headersTimeout = 120001;   // must exceed keepAliveTimeout
```

For Go's net/http, the equivalent setting is the http.Server IdleTimeout field.

Case: NetworkPolicy Blocking Ingress
If you have NetworkPolicy, it may block traffic from the Ingress Controller:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
spec:
  podSelector:
    matchLabels:
      app: your-app
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```

Always Check Nginx Logs
```bash
kubectl logs -n ingress-nginx <nginx-pod> | grep "502\|upstream\|error"
```

Common messages:

- `connect() failed (111: Connection refused)` → port wrong or app not listening
- `upstream prematurely closed connection` → keepalive issue or app crash
Quick Checklist
| Check | Command |
|---|---|
| Pods ready? | `kubectl get pods -n ns` |
| Endpoints exist? | `kubectl get endpoints svc -n ns` |
| Port correct? | `kubectl describe svc -n ns` |
| App responding? | `kubectl exec pod -- curl localhost:PORT` |
| NetworkPolicy? | `kubectl get netpol -n ns` |
| Nginx logs? | `kubectl logs -n ingress-nginx <pod>` |
Work through this list. You'll find the issue before the end.