Kubernetes Service Not Routing Traffic to Pods — Every Cause and Fix (2026)
Your Service exists and your pods are running, but traffic never reaches them: curl times out and the 502s keep coming. Here's every reason a Kubernetes Service fails to route traffic, and the exact fix for each.
Quick Diagnosis
```bash
# Step 1: Check the Service exists and has the correct port
kubectl get svc my-service -n <namespace>

# Step 2: Check Endpoints — this is the key command
kubectl get endpoints my-service -n <namespace>

# Step 3: Describe for events
kubectl describe svc my-service -n <namespace>

# Step 4: Test from inside the cluster
kubectl run test --image=busybox --rm -it --restart=Never -- wget -qO- http://my-service.<namespace>:80
```

If Endpoints shows `<none>`, your selector doesn't match any pods. That's the most common cause.
Cause 1: Selector Labels Don't Match Pod Labels
Symptom: kubectl get endpoints my-service shows <none>.
```bash
# Check the Service selector
kubectl get svc my-service -o jsonpath='{.spec.selector}'
# Output: {"app":"my-app"}

# Check the Pod labels
kubectl get pods --show-labels -n <namespace>
# Look for the app=my-app label
```

Common mismatches:
```yaml
# Service selector
selector:
  app: my-app    # ← looking for this

# Pod labels (WRONG)
labels:
  app: myapp     # ← hyphen vs no hyphen
  app: my-App    # ← case sensitive
  App: my-app    # ← capital A
```

Fix: Make the selector and pod labels match exactly.
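As a minimal sketch, a Service and Deployment whose labels line up (all names here are illustrative):

```yaml
# Service
spec:
  selector:
    app: my-app      # ← must match the pod template labels exactly

# Deployment pod template
template:
  metadata:
    labels:
      app: my-app    # ← same key, same value, same case
```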
```bash
# Quick check — list pods matching the selector
kubectl get pods -l app=my-app -n <namespace>
# Should show your pods. If empty → label mismatch.
```

Cause 2: Pod Not in Ready State
Symptom: Endpoints shows pod IP but traffic still fails.
```bash
kubectl get endpoints my-service -n <namespace>

# describe shows whether pod IPs sit under "NotReadyAddresses" instead of "Addresses"
kubectl describe endpoints my-service -n <namespace>
```

Kubernetes only routes traffic to Ready pods. If the readiness probe fails, the pod is removed from the Endpoints list.
Fix — check readiness probe:
```bash
kubectl describe pod <pod-name> -n <namespace>
# Look for: Readiness probe failed
```

Common readiness probe issues:
- Wrong path (`/health` vs `/healthz`)
- Wrong port
- App not ready fast enough — increase `initialDelaySeconds`
- App never becomes healthy — fix the app
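A sketch of a readiness probe with a longer startup window — the path, port, and timings are illustrative, not values from this article:

```yaml
readinessProbe:
  httpGet:
    path: /healthz          # must be a path the app actually serves
    port: 8080              # must match containerPort
  initialDelaySeconds: 15   # give the app time to start before probing
  periodSeconds: 5
  failureThreshold: 3       # removed from Endpoints after 3 consecutive failures
```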
Cause 3: Wrong targetPort
```yaml
# Service
spec:
  ports:
  - port: 80
    targetPort: 8080   # ← must match the container port

# Pod
containers:
- ports:
  - containerPort: 3000   # ← mismatch! app listens on 3000, not 8080
```

Fix: Set targetPort to the port your app actually listens on.
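Assuming the app really listens on 3000, a consistent pair would look like:

```yaml
# Service
spec:
  ports:
  - port: 80           # what clients connect to
    targetPort: 3000   # what the container listens on

# Pod
containers:
- ports:
  - containerPort: 3000
```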
```bash
# Check what port the app is listening on inside the container
kubectl exec -it <pod> -n <namespace> -- netstat -tlnp
# or
kubectl exec -it <pod> -n <namespace> -- ss -tlnp
```

Cause 4: NetworkPolicy Blocking Traffic
```bash
# Check if NetworkPolicies exist
kubectl get networkpolicy -n <namespace>

# Describe to see rules
kubectl describe networkpolicy <name> -n <namespace>
```

A NetworkPolicy that selects your pods but has no ingress rules blocks ALL inbound traffic to them.
Debug — test without NetworkPolicy temporarily:
```bash
kubectl delete networkpolicy <name> -n <namespace>
# Test if traffic flows now
# If yes → the NetworkPolicy is the issue
```

Fix — allow traffic in the NetworkPolicy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx   # allow from ingress
    ports:
    - port: 8080
```

Cause 5: Service Port Protocol Mismatch
```yaml
# Service
ports:
- port: 80
  protocol: UDP   # ← wrong for HTTP

# Should be
  protocol: TCP   # ← default, for HTTP/HTTPS/gRPC
```

Fix: Remove protocol (it defaults to TCP) or set it correctly.
Cause 6: ClusterIP Service Accessed from Outside Cluster
ClusterIP is only reachable inside the cluster. You can't curl a ClusterIP from your laptop.
```bash
kubectl get svc my-service
# TYPE: ClusterIP ← not accessible externally
```

Fix options:

- `kubectl port-forward svc/my-service 8080:80` — for local testing
- Change type to `NodePort` or `LoadBalancer` — for external access
- Use `Ingress` — for HTTP routing
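For illustration, a sketch of the same Service exposed externally — the type and ports are assumptions, adjust to your setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer   # use NodePort if no cloud load balancer is available
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```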
Cause 7: kube-proxy Not Running
kube-proxy programs the iptables (or IPVS) rules that route Service traffic. If it's down, no Services work.
```bash
kubectl get pods -n kube-system | grep kube-proxy
# Should show Running on every node

kubectl logs -n kube-system <kube-proxy-pod> --tail=50
```

Fix:

```bash
kubectl rollout restart daemonset/kube-proxy -n kube-system
```

Cause 8: CoreDNS Not Resolving Service Name
Service DNS works via CoreDNS. If DNS is broken, service name resolution fails even when the Service works by IP.
```bash
# Test DNS from inside a pod
kubectl run dnstest --image=busybox --rm -it --restart=Never -- nslookup my-service.production.svc.cluster.local

# Check CoreDNS pods
kubectl get pods -n kube-system | grep coredns
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
```

Fix:

```bash
kubectl rollout restart deployment/coredns -n kube-system
```

Cause 9: Namespace Mismatch in DNS
Service DNS format: <service>.<namespace>.svc.cluster.local
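Sketched as plain shell string assembly (the service and namespace values are illustrative), the name expands to:

```shell
# Build the fully-qualified Service DNS name from its parts
SVC=my-service
NS=production
FQDN="${SVC}.${NS}.svc.cluster.local"
echo "$FQDN"   # prints my-service.production.svc.cluster.local
```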
```bash
# Wrong — different namespace
curl http://my-service/api   # only works from the same namespace

# Right — cross-namespace
curl http://my-service.production.svc.cluster.local/api
```

Cause 10: Headless Service Misconfiguration
Headless Services (clusterIP: None) don't do load balancing — they return pod IPs directly via DNS. If your app expects a single IP (like JDBC connection string), headless won't work as expected.
```bash
kubectl get svc my-service -o jsonpath='{.spec.clusterIP}'
# "None" = headless
```

Fix: Remove clusterIP: None if you need standard load-balanced routing.
Full Debug Flowchart
```
Service not working
  ↓
kubectl get endpoints → <none>?
  Yes → label selector mismatch → fix labels
  No ↓
Pods Ready?
  No → fix readiness probe
  Yes ↓
curl by ClusterIP works?
  No → kube-proxy issue
  Yes ↓
curl by DNS name works?
  No → CoreDNS issue
  Yes ↓
External traffic fails?
  → NetworkPolicy or wrong Service type
```
One-liner Debug Script
```bash
# Requires jq for the selector-to-label conversion
NS=<namespace>; SVC=<service-name>
echo "=== Service ===" && kubectl get svc $SVC -n $NS
echo "=== Endpoints ===" && kubectl get endpoints $SVC -n $NS
echo "=== Pods matching selector ===" && kubectl get pods -n $NS -l "$(kubectl get svc $SVC -n $NS -o jsonpath='{.spec.selector}' | jq -r 'to_entries|map("\(.key)=\(.value)")|join(",")')"
echo "=== NetworkPolicies ===" && kubectl get networkpolicy -n $NS
```

Learn More
- KodeKloud Kubernetes Networking Labs — hands-on Service and NetworkPolicy debugging
- CKA on Udemy — Services and networking is a major CKA exam topic