Nginx Ingress 502 Bad Gateway — How to Fix It (2026)

Getting 502 Bad Gateway from your Nginx Ingress Controller? Here are the most common causes and the exact fix for each one.

DevOpsBoys · Apr 3, 2026 · 2 min read

Your pods are running. Your Service exists. But every request through the Ingress returns 502 Bad Gateway. Here's how to find and fix the root cause fast.


What 502 Means

502 = Nginx received your request but got an invalid response (or none at all) from the upstream pod.

  • 502 → upstream gave a bad response (pod issue)
  • 503 → no healthy upstream (selector mismatch, no ready pods)
  • 504 → upstream didn't respond in time (timeout)

Step 1: Check Pod Readiness

bash
kubectl get pods -n your-namespace
kubectl describe pod <pod> -n your-namespace

Nginx only routes to pods that pass readiness probes. A pod can be Running but still not ready (0/1).

bash
# Test the readiness endpoint directly (requires curl in the container image)
kubectl exec -it <pod> -n your-namespace -- curl -f http://localhost:8080/health
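If the pod stays 0/1, compare the probe definition against what the app actually serves. A minimal sketch of a readinessProbe in the Deployment's pod spec, assuming the /health endpoint and port 8080 tested above (both are placeholders for your app's real values):

```yaml
# Goes under spec.template.spec.containers[*] in the Deployment;
# path and port must match a real endpoint in your app.
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```

If the probe's path or port doesn't match anything the app serves, the pod never becomes ready and Nginx never routes to it.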

Step 2: Check Service Endpoints

bash
kubectl get endpoints your-service -n your-namespace

If it shows <none> — your Service selector doesn't match any pods.

bash
# Compare pod labels vs service selector
kubectl get pods -n your-namespace --show-labels
kubectl describe svc your-service -n your-namespace | grep Selector

Labels are case-sensitive. Fix the mismatch.
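As a sketch of what "matching" means here (names are placeholders), the Service selector must equal the pod labels byte for byte:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  selector:
    app: your-app        # must equal the pod label below exactly
  ports:
    - port: 80
      targetPort: 8080
---
# In the Deployment's pod template:
#   template:
#     metadata:
#       labels:
#         app: your-app  # "App: Your-App" would NOT match
```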


Step 3: Verify the Port Chain

Ingress servicePort → Service port → containerPort → app listening port — every link must match.

bash
kubectl describe ingress your-ingress -n your-namespace
kubectl describe svc your-service -n your-namespace | grep Port
kubectl exec -it <pod> -- ss -tlnp
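The chain above can be sketched in one place. All names and numbers here are illustrative; the point is which field must equal which:

```yaml
# Ingress: the backend port must match the Service's "port"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-service
                port:
                  number: 80      # -> Service .spec.ports[].port
---
# Service: targetPort must match the containerPort
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  selector:
    app: your-app
  ports:
    - port: 80            # what the Ingress targets
      targetPort: 8080    # -> containerPort, and the app's listen port
```

If `ss -tlnp` inside the pod shows the app listening on anything other than the containerPort, the chain is broken at the last link.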

Case: Keepalive Mismatch

Node.js and Go HTTP servers close idle keepalive connections after their own (often shorter) timeout. If Nginx tries to reuse a connection the app has just closed, the request fails with a 502.

yaml
# Set in the ingress-nginx controller's ConfigMap (this is a cluster-wide
# ConfigMap key, not a per-Ingress annotation); "0" disables upstream keepalive
upstream-keepalive-connections: "0"

Or increase your app's keepalive timeout above Nginx's 75s default:

javascript
// Node.js: keep idle sockets open longer than Nginx does
server.keepAliveTimeout = 120000; // 120s
server.headersTimeout = 120001;   // must be greater than keepAliveTimeout

Case: NetworkPolicy Blocking Ingress

If you have NetworkPolicy, it may block traffic from the Ingress Controller:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
spec:
  podSelector:
    matchLabels:
      app: your-app
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx

Always Check Nginx Logs

bash
kubectl logs -n ingress-nginx <nginx-pod> | grep "502\|upstream\|error"

Common messages:

  • connect() failed (111: Connection refused) → port wrong or app not listening
  • upstream prematurely closed connection → keepalive issue or app crash

Quick Checklist

| Check | Command |
|---|---|
| Pods ready? | kubectl get pods -n ns |
| Endpoints exist? | kubectl get endpoints svc -n ns |
| Port correct? | kubectl describe svc -n ns |
| App responding? | kubectl exec pod -- curl localhost:PORT |
| NetworkPolicy? | kubectl get netpol -n ns |
| Nginx logs | kubectl logs -n ingress-nginx <pod> |

Work through this list. You'll find the issue before the end.
