
Kubernetes Service Not Routing Traffic to Pods — Every Cause and Fix (2026)

Your Service exists but traffic isn't reaching pods. Curl times out, 502s keep coming. Here's every reason a Kubernetes Service fails to route and how to fix each one.

DevOpsBoys · Apr 13, 2026 · 5 min read

Your Service is created and your pods are Running, yet traffic never reaches them. Below is every reason a Kubernetes Service fails to route traffic, with the exact fix for each.

Quick Diagnosis

bash
# Step 1: Check Service exists and has correct port
kubectl get svc my-service -n <namespace>
 
# Step 2: Check Endpoints — this is the key command
kubectl get endpoints my-service -n <namespace>
 
# Step 3: Describe for events
kubectl describe svc my-service -n <namespace>
 
# Step 4: Test from inside the cluster
kubectl run test --image=busybox --rm -it --restart=Never -- wget -qO- http://my-service.<namespace>:80

If Endpoints shows <none> — your selector doesn't match any pods. That's the most common cause.


Cause 1: Selector Labels Don't Match Pod Labels

Symptom: kubectl get endpoints my-service shows <none>.

bash
# Check Service selector
kubectl get svc my-service -o jsonpath='{.spec.selector}'
# Output: {"app":"my-app"}
 
# Check Pod labels
kubectl get pods --show-labels -n <namespace>
# Look for app=my-app label

Common mismatches:

yaml
# Service selector
selector:
  app: my-app        # ← looking for this
 
# Pod labels that DON'T match (three separate mistakes; a
# label key can appear only once per pod)
labels:
  app: myapp         # ← missing hyphen
# or
labels:
  app: my-App        # ← label values are case-sensitive
# or
labels:
  App: my-app        # ← label keys are case-sensitive too

Fix: Make selector and pod labels exactly match.

bash
# Quick check — list pods matching the selector
kubectl get pods -l app=my-app -n <namespace>
# Should show your pods. If empty → label mismatch.
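When in doubt, put the two side by side. A matched pair for reference (a sketch; the names are illustrative):

yaml
# Service
spec:
  selector:
    app: my-app      # must equal the pod template labels below
 
# Deployment pod template
template:
  metadata:
    labels:
      app: my-app    # exact match: same key, same case, same value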

Cause 2: Pod Not in Ready State

Symptom: Pods are running and labels match, but Endpoints is empty (or missing some pods) and traffic fails.

bash
kubectl get endpoints my-service -n <namespace>
# May show <none> even though pods exist; get only lists Ready addresses
 
kubectl describe endpoints my-service -n <namespace>
# Not-Ready pods appear under "NotReadyAddresses" instead of "Addresses"

Kubernetes only routes traffic to Ready pods. If the readiness probe fails, the pod is removed from the Endpoints list until it passes again.

Fix — check readiness probe:

bash
kubectl describe pod <pod-name> -n <namespace>
# Look for: Readiness probe failed

Common readiness probe issues:

  • Wrong path (/health vs /healthz)
  • Wrong port
  • App not ready fast enough — increase initialDelaySeconds
  • App never becomes healthy — fix the app
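If the probe itself is the problem, here is a readiness probe sketch with a longer initial delay (the path, port, and timings are assumptions; adjust them to your app):

yaml
containers:
- name: my-app
  readinessProbe:
    httpGet:
      path: /healthz          # must be a path your app actually serves
      port: 8080              # must match the containerPort
    initialDelaySeconds: 15   # give slow-starting apps time before the first check
    periodSeconds: 5
    failureThreshold: 3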

Cause 3: Wrong targetPort

yaml
# Service
spec:
  ports:
  - port: 80
    targetPort: 8080   # ← must match container port
 
# Pod
containers:
- ports:
  - containerPort: 3000  # ← mismatch! app listens on 3000, not 8080

Fix: Match targetPort to the port your app actually listens on.

bash
# Check what port app is listening on inside container
kubectl exec -it <pod> -n <namespace> -- netstat -tlnp
# or
kubectl exec -it <pod> -n <namespace> -- ss -tlnp
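Once you know the real port (3000 in the mismatch above), align the Service with it:

yaml
# Service (fixed)
spec:
  ports:
  - port: 80           # the port clients connect to
    targetPort: 3000   # the port the app actually listens on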

Cause 4: NetworkPolicy Blocking Traffic

bash
# Check if NetworkPolicies exist
kubectl get networkpolicy -n <namespace>
 
# Describe to see rules
kubectl describe networkpolicy <name> -n <namespace>

A NetworkPolicy with no ingress rules blocks ALL traffic to matching pods.

Debug — test without NetworkPolicy temporarily:

bash
# Save a copy first so you can restore it
kubectl get networkpolicy <name> -n <namespace> -o yaml > netpol-backup.yaml
kubectl delete networkpolicy <name> -n <namespace>
# Test if traffic flows now
# If yes → NetworkPolicy is the issue. Restore with: kubectl apply -f netpol-backup.yaml

Fix — allow traffic in NetworkPolicy:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx   # this label is set automatically on every namespace
    ports:
    - port: 8080   # the pod's targetPort, not the Service port

Cause 5: Service Port Protocol Mismatch

yaml
# Service
ports:
- port: 80
  protocol: UDP    # ← wrong for HTTP
 
# Should be
  protocol: TCP    # ← default, for HTTP/HTTPS/gRPC

Fix: Remove protocol (defaults to TCP) or set it correctly.


Cause 6: ClusterIP Service Accessed from Outside Cluster

ClusterIP is only reachable inside the cluster. You can't curl a ClusterIP from your laptop.

bash
kubectl get svc my-service
# TYPE: ClusterIP  ← not accessible externally

Fix options:

  • kubectl port-forward svc/my-service 8080:80 — for local testing
  • Change type to NodePort or LoadBalancer — for external access
  • Use Ingress — for HTTP routing
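For quick external access, switching to NodePort is a one-line change (a sketch; the commented nodePort value is an assumption):

yaml
spec:
  type: NodePort       # was ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    # nodePort: 30080  # optional; must fall in the default 30000-32767 range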

Cause 7: kube-proxy Not Running

kube-proxy programs the iptables (or IPVS) rules that route Service traffic. If it's down, Service virtual IPs stop working.

bash
kubectl get pods -n kube-system | grep kube-proxy
# Should show Running on every node
 
kubectl logs -n kube-system <kube-proxy-pod> --tail=50

Fix:

bash
kubectl rollout restart daemonset/kube-proxy -n kube-system

Cause 8: CoreDNS Not Resolving Service Name

Service DNS works via CoreDNS. If DNS is broken, service name resolution fails even when the Service works by IP.

bash
# Test DNS from inside a pod
kubectl run dnstest --image=busybox --rm -it --restart=Never -- nslookup my-service.production.svc.cluster.local
 
# Check CoreDNS pods
kubectl get pods -n kube-system | grep coredns
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50

Fix:

bash
kubectl rollout restart deployment/coredns -n kube-system

Cause 9: Namespace Mismatch in DNS

Service DNS format: <service>.<namespace>.svc.cluster.local

bash
# Wrong — different namespace
curl http://my-service/api          # only works in same namespace
 
# Right — cross-namespace
curl http://my-service.production.svc.cluster.local/api

Cause 10: Headless Service Misconfiguration

Headless Services (clusterIP: None) don't load-balance; DNS returns the individual pod IPs directly. If your app expects a single stable IP (like a JDBC connection string), a headless Service won't behave as expected.

bash
kubectl get svc my-service -o jsonpath='{.spec.clusterIP}'
# "None" = headless

Fix: Remove clusterIP: None if you need standard load-balanced routing.
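For comparison, a minimal standard Service (a sketch, assuming app: my-app pods listening on 8080):

yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # no clusterIP: None, so a virtual IP is allocated and kube-proxy load-balances
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080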


Full Debug Flowchart

Service not working
        ↓
kubectl get endpoints → <none>?
  Yes → label selector mismatch → fix labels
  No  ↓
Pods Ready?
  No  → fix readiness probe
  Yes ↓
curl by ClusterIP works?
  No  → kube-proxy issue
  Yes ↓
curl by DNS name works?
  No  → CoreDNS issue
  Yes ↓
External traffic fails?
       → NetworkPolicy or wrong Service type

One-liner Debug Script

bash
NS=<namespace>; SVC=<service-name>
echo "=== Service ===" && kubectl get svc $SVC -n $NS
echo "=== Endpoints ===" && kubectl get endpoints $SVC -n $NS
echo "=== Pods matching selector ===" && kubectl get pods -n $NS -l "$(kubectl get svc $SVC -n $NS -o jsonpath='{.spec.selector}' | jq -r 'to_entries|map("\(.key)=\(.value)")|join(",")')"   # requires jq
echo "=== NetworkPolicies ===" && kubectl get networkpolicy -n $NS
