Kubernetes Network Policies Complete Guide — Zero Trust Networking in 2026
Complete guide to Kubernetes NetworkPolicies: default deny, ingress/egress rules, namespace isolation, CIDR blocks, and production patterns for zero-trust pod networking.
By default, every pod in a Kubernetes cluster can talk to every other pod. There are no firewalls, no access controls, no segmentation. A compromised pod in your staging namespace can reach your production database. NetworkPolicies fix this, and in 2026, they are a non-negotiable security requirement.
Here is everything you need to know to implement proper network segmentation in Kubernetes.
How NetworkPolicies Work
A NetworkPolicy is a Kubernetes resource that controls traffic flow at the pod level. It acts like a firewall rule for pods:
- Without any NetworkPolicy: all traffic is allowed (default allow)
- With a NetworkPolicy that selects a pod: only explicitly allowed traffic reaches that pod (default deny for selected pods)
NetworkPolicies are enforced by your CNI plugin. Not all CNIs support them:
| CNI | NetworkPolicy Support |
|---|---|
| Calico | Full support |
| Cilium | Full support (plus extended CiliumNetworkPolicy) |
| Weave Net | Full support (project discontinued in 2024; avoid for new clusters) |
| Flannel | No support |
| AWS VPC CNI | Native support in v1.14+ (must be enabled); otherwise add Calico or Cilium |
| Azure CNI | Supported via Azure Network Policy Manager, Calico, or Cilium |
If you are on a managed cluster, check your CNI. On EKS, the default VPC CNI enforces NetworkPolicies only if its network policy feature is explicitly enabled (VPC CNI v1.14 or later); on older setups, install Calico or Cilium alongside it.
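One quick way to identify your CNI is to look at the system pods. Pod names vary by distribution, so treat the grep pattern below as a sketch:

```shell
# CNI pods typically run in kube-system (calico-node, cilium, kube-flannel, weave-net, aws-node, ...)
kubectl get pods -n kube-system -o wide | grep -Ei 'calico|cilium|flannel|weave|aws-node'

# The CNI config installed on a node also reveals the plugin (run on a node or via a debug pod)
ls /etc/cni/net.d/
```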
Step 1: Default Deny All Traffic
The first rule of Kubernetes network security: start by denying everything, then allow what you need.
Deny All Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {} # Selects ALL pods in the namespace
  policyTypes:
    - Ingress
  # No ingress rules = deny all incoming traffic
Deny All Egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  # No egress rules = deny all outgoing traffic
Deny Both (Recommended Starting Point)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Apply this to every namespace, then layer specific allow rules on top.
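Rolling the policy out everywhere can be scripted. A small sketch, assuming the manifest above is saved as default-deny-all.yaml with the hardcoded namespace: production line removed (so it applies to whichever namespace is passed with -n); it deliberately skips kube-system:

```shell
# Apply default-deny to every namespace except kube-system
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  if [ "$ns" != "kube-system" ]; then
    kubectl apply -f default-deny-all.yaml -n "$ns"
  fi
done
```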
Step 2: Allow DNS (Critical)
After default-deny egress, your pods cannot resolve DNS. Nothing works. Always allow DNS first:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
This allows all pods in the namespace to reach CoreDNS in kube-system.
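The rule above opens port 53 to everything in kube-system. If you want to scope it down to CoreDNS only, you can additionally match the CoreDNS pod labels; k8s-app: kube-dns is the label used on most distributions, but verify it on yours. A sketch:

```yaml
# Tighter variant: only CoreDNS pods, not everything in kube-system
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
        podSelector:
          matchLabels:
            k8s-app: kube-dns # CoreDNS label on most distributions
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```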
Step 3: Allow Specific Ingress Traffic
Now selectively open traffic paths. Here is a typical web application pattern:
Allow Ingress Controller to Reach Your App
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-web
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
Allow Frontend to Reach Backend API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-frontend
      ports:
        - protocol: TCP
          port: 3000
Allow Backend to Reach Database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-database
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend-api
      ports:
        - protocol: TCP
          port: 5432
Now your traffic flow is: Ingress -> Frontend -> API -> Database. Nothing else can reach the database directly.
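Remember that with default-deny-all in place, these ingress rules alone are not enough: each client pod also needs a matching egress rule, or its outbound connections are dropped before they ever reach the target. A sketch of the frontend side of the frontend-to-API path (the egress counterpart of allow-frontend-to-api):

```yaml
# Egress counterpart: lets web-frontend pods open connections to backend-api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backend-api
      ports:
        - protocol: TCP
          port: 3000
```

The API pods need equivalent egress rules toward the database; Pattern 1 below lists the full set.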
Step 4: Namespace Isolation
Prevent cross-namespace traffic entirely:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {} # Only allow from pods in the same namespace
To allow specific cross-namespace traffic (for example, monitoring scraping production metrics):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 9090
Note the indentation: namespaceSelector and podSelector under the same from entry mean AND (both must match). Separate entries mean OR.
Step 5: Egress to External Services
Allow your app to reach external APIs or databases outside the cluster:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend-api
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow external API (specific CIDR)
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24 # External API IP range
      ports:
        - protocol: TCP
          port: 443
    # Allow AWS RDS (specific endpoint)
    - to:
        - ipBlock:
            cidr: 10.0.100.0/24 # RDS subnet
      ports:
        - protocol: TCP
          port: 5432
Common Production Patterns
Pattern 1: Three-Tier Application
# Frontend: accepts traffic from ingress, sends to API
# API: accepts from frontend, sends to database and external APIs
# Database: accepts only from API
# Apply default deny to the namespace first, then:
# 1. allow-ingress-to-frontend
# 2. allow-frontend-to-api (ingress on API pods)
# 3. frontend-egress-to-api (egress on frontend pods)
# 4. allow-api-to-database (ingress on database pods)
# 5. api-egress-to-database (egress on API pods)
# 6. allow-dns (egress for all pods)
Pattern 2: Monitoring Access
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: production
spec:
  podSelector: {} # All pods in namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
      ports:
        - protocol: TCP
          port: 9090 # Prometheus metrics
        - protocol: TCP
          port: 9091 # Additional metrics
Pattern 3: Allow Internal Cluster Traffic Only
Block all external traffic while allowing any pod-to-pod communication within the cluster:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cluster-internal-only
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8 # Adjust to your cluster's pod and service CIDRs
        - ipBlock:
            cidr: 172.16.0.0/12
        # Add 192.168.0.0/16 if your cluster uses it; ipBlock also supports
        # an `except` list to carve subnets out of an allowed range
Testing NetworkPolicies
Never apply policies blindly. Test them:
# Deploy a test pod
kubectl run test-pod --image=busybox -n production -- sleep 3600
# Test connectivity to a specific service
kubectl exec test-pod -n production -- wget -qO- --timeout=3 http://backend-api:3000/health
# Test DNS resolution
kubectl exec test-pod -n production -- nslookup backend-api
# Test external connectivity
kubectl exec test-pod -n production -- wget -qO- --timeout=3 https://api.github.com
# Clean up
kubectl delete pod test-pod -n production
For more comprehensive testing, use netpol or kubectl-np-viewer:
# List every policy in the namespace with its full spec
kubectl get netpol -n production -o yaml
Common Mistakes
1. Forgetting DNS egress. After applying default-deny egress, every pod loses DNS resolution. Always allow UDP/TCP port 53 to kube-system.
2. AND vs OR confusion. In the from or to array:
- Same entry = AND: namespaceSelector and podSelector must both match
- Separate entries = OR: traffic matching either rule is allowed
# AND: must come from the monitoring namespace AND a prometheus pod
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            purpose: monitoring
        podSelector:
          matchLabels:
            app: prometheus
# OR: from any pod in the monitoring namespace OR any prometheus pod
# in this policy's own namespace
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            purpose: monitoring
      - podSelector:
          matchLabels:
            app: prometheus
This is the most common NetworkPolicy mistake. The indentation difference changes the meaning entirely. (Also note that a podSelector on its own only matches pods in the policy's namespace, never cluster-wide.)
3. Not checking CNI support. Flannel does not enforce NetworkPolicies. Your policies exist but do nothing. Verify with a connectivity test after applying.
4. Applying to kube-system. Be extremely careful with policies in kube-system. Breaking DNS or the API server has cluster-wide impact.
Debugging Policies
When traffic is blocked unexpectedly:
# 1. List all policies affecting a pod
kubectl get netpol -n production
# 2. Describe a specific policy
kubectl describe netpol allow-frontend-to-api -n production
# 3. Check pod labels match the policy selector
kubectl get pod my-pod -n production --show-labels
# 4. If using Calico, look for denied packets (these only appear in the
#    logs if drop logging is enabled, e.g. via a policy Log action)
kubectl logs -n calico-system -l k8s-app=calico-node | grep -i deny
# 5. If using Cilium, use the Hubble CLI
hubble observe --namespace production --verdict DROPPED
Wrapping Up
NetworkPolicies are the built-in firewall that most Kubernetes clusters are not using. The default allow-all model is a massive security gap. Start with default deny, allow DNS, then open only the paths your application actually needs.
The effort is minimal — a few YAML files per namespace — but the security improvement is substantial. A compromised pod in a namespace with proper NetworkPolicies can only reach what you explicitly allow.
For hands-on practice with Kubernetes networking, security, and CKA exam preparation, the labs at KodeKloud cover NetworkPolicies with practical scenarios. If you need a cluster with Calico or Cilium to test policies, DigitalOcean's managed Kubernetes comes with Cilium as the default CNI.