NGINX Ingress Connection Timed Out — Fix Guide (2026)
Getting connection timeout or upstream timed out errors through NGINX Ingress? Here's how to debug and fix timeout issues between NGINX and your backend services.
Seeing upstream timed out (110: Connection timed out) in your NGINX Ingress logs means the backend pod isn't responding within the configured timeout window. Here's the systematic fix.
Common Error Messages
# In browser
504 Gateway Timeout
# In NGINX Ingress pod logs
upstream timed out (110: Connection timed out) while reading response header from upstream
# Or
connect() failed (110: Connection timed out) while connecting to upstream
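To confirm which variant you're hitting, grep the controller logs; the label selector below assumes a standard ingress-nginx install (it's the same selector used in the debugging checklist at the end):
# Search controller logs for timeout errors
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx | grep "timed out"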
Step 1: Check If the Backend Is Actually Slow
Before tweaking NGINX, verify if your backend is genuinely slow:
# Exec into a pod in the same namespace and test directly
kubectl run curl-test --image=curlimages/curl --restart=Never -- sleep 3600
kubectl exec curl-test -- curl -v --max-time 30 \
http://my-service.my-namespace.svc.cluster.local:8080/api/health
# Check pod resource usage
kubectl top pods -n my-namespace
# Check if your app pod is throttled
kubectl describe pod my-app-xxx -n my-namespace | grep -A5 "Limits\|Requests"
If the direct curl times out too, the issue is your application — not NGINX.
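If the direct curl succeeds but feels slow, curl can put a number on it; a quick sketch reusing the curl-test pod and the placeholder service name from above:
# Print total response time for the health endpoint
kubectl exec curl-test -- curl -s -o /dev/null -w "%{time_total}s\n" \
  http://my-service.my-namespace.svc.cluster.local:8080/api/health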
Fix 1: Increase NGINX Timeout Annotations
The most common fix — NGINX's default timeouts are often too low for slow operations (DB queries, ML inference, file processing):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    # How long to wait for backend to send response headers (default: 60s)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    # How long to wait to establish a connection to the backend (default: 5s; NGINX caps this at roughly 75s)
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    # How long to wait between successive writes when sending the request to the backend (default: 60s)
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    # For large file uploads
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
Values are in seconds. For ML inference or long-running APIs, raise the read and send timeouts to 300–600 seconds; keep the connect timeout short, since a slow TCP connect points at a different problem than a slow response.
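After applying, confirm the values landed in the rendered NGINX config (the pod name is a placeholder):
# Check the generated config for your timeouts
kubectl exec -n ingress-nginx <nginx-pod> -- nginx -T | grep proxy_read_timeout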
Fix 2: Service Port Mismatch
The Ingress points to the right service but the wrong port — connection times out because nothing is listening:
# Verify service exists and ports match
kubectl get svc my-service -n my-namespace
# Check what port the pods actually listen on
kubectl get pods -n my-namespace -o yaml | grep containerPort
# Test the service directly
kubectl exec curl-test -- curl http://my-service.my-namespace:8080
# vs
kubectl exec curl-test -- curl http://my-service.my-namespace:3000
Fix the Ingress backend port to match what the service actually exposes.
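To line up all three places the port must match (Ingress backend, Service port/targetPort, container port), jsonpath helps; the resource names below are the placeholders used throughout this guide:
# Port the Ingress sends traffic to
kubectl get ingress my-app-ingress -n my-namespace \
  -o jsonpath='{.spec.rules[0].http.paths[0].backend.service.port.number}{"\n"}'
# Service port -> targetPort mapping
kubectl get svc my-service -n my-namespace \
  -o jsonpath='{range .spec.ports[*]}{.port} -> {.targetPort}{"\n"}{end}'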
Fix 3: Readiness Probe Failing — NGINX Sends Traffic to Unhealthy Pods
A misconfigured readiness probe cuts both ways: too strict and healthy pods never register as endpoints, too lax and pods report Ready while they can't actually serve traffic:
# Check endpoint status
kubectl get endpoints my-service -n my-namespace
# If endpoints show "none" or wrong IPs — readiness probe is failing
kubectl describe pod my-app-xxx -n my-namespace | grep -A10 "Readiness"
kubectl logs my-app-xxx -n my-namespace | tail -20
Fix the readiness probe:
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15  # give app time to start
  periodSeconds: 10
  failureThreshold: 3
  timeoutSeconds: 5        # probe timeout
Fix 4: keepalive Timeout Mismatch
NGINX reuses connections to backends (keepalive). If your backend closes idle connections faster than NGINX expects, requests land on dead connections and time out.
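Before touching config, check whether the backend honors keep-alive at all; a rough probe, reusing the curl-test pod from Step 1, is to send two requests over one curl session and watch for connection reuse:
# Two URLs in one invocation share a connection if the backend keeps it alive;
# look for "Re-using existing connection" in the verbose output
kubectl exec curl-test -- curl -sv -o /dev/null -o /dev/null \
  http://my-service.my-namespace:8080/ http://my-service.my-namespace:8080/ \
  2>&1 | grep -i "connection"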
Upstream keepalive is tuned globally in the controller ConfigMap; ingress-nginx does not expose per-Ingress annotations for it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # idle timeout for NGINX -> backend connections (seconds)
  upstream-keepalive-timeout: "60"
  # connections kept alive per worker
  upstream-keepalive-connections: "10"
  # client -> NGINX keepalive timeout (seconds)
  keep-alive: "75"
Fix 5: Slow DNS Resolution
NGINX resolves DNS names when its configuration loads and caches the results. Slow or stale cluster DNS can therefore stall requests until they time out; this mainly affects ExternalName services or Ingresses using the service-upstream annotation, since ingress-nginx normally discovers pod endpoints through the Kubernetes API, not DNS:
# Check CoreDNS health
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns | tail -20
# Check NGINX resolver config
kubectl exec -n ingress-nginx <nginx-pod> -- cat /etc/nginx/nginx.conf | grep resolver
If CoreDNS is unhealthy or overloaded, restart or scale it (the deployment is named coredns on most distributions):
# Restart CoreDNS
kubectl -n kube-system rollout restart deployment coredns
# Add replicas if it's resource-starved
kubectl -n kube-system scale deployment coredns --replicas=3
Fix 6: NGINX Worker Processes Saturated
If NGINX workers are handling too many concurrent requests, new requests queue and time out:
# Check NGINX pod resource usage
kubectl top pods -n ingress-nginx
# Check current connections
kubectl exec -n ingress-nginx <nginx-pod> -- \
curl localhost:10246/nginx_status
If Active connections is consistently high, increase NGINX resources:
# In ingress-nginx Helm chart values
controller:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 2000m
      memory: 1Gi
  config:
    worker-processes: "4"  # default is auto (1 per CPU)
    worker-connections: "16384"
Debugging Checklist
# 1. Check NGINX ingress controller logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50
# 2. Check backend pod logs
kubectl logs -n my-namespace -l app=my-app --tail=50
# 3. Check endpoints (are pods registered?)
kubectl get endpoints -n my-namespace my-service
# 4. Test direct service access (bypass NGINX)
kubectl run test --image=curlimages/curl --restart=Never -- \
curl -v --max-time 10 http://my-service.my-namespace:8080
# 5. Check current NGINX config
kubectl exec -n ingress-nginx <nginx-pod> -- nginx -T | grep timeout
Quick summary of fixes:
- Slow app → increase proxy-read-timeout annotation (most common)
- Wrong port → fix service port in Ingress spec
- Unhealthy pods → fix readiness probe
- keepalive mismatch → set upstream-keepalive-timeout in the controller ConfigMap
- NGINX overloaded → increase resources and worker connections
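For a quick experiment without editing YAML, the timeout annotation can be patched in place (names match the examples above):
# Bump the read timeout on an existing Ingress
kubectl annotate ingress my-app-ingress -n my-namespace \
  nginx.ingress.kubernetes.io/proxy-read-timeout="300" --overwrite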