What is CNI in Kubernetes? Container Network Interface Explained Simply
CNI is why your pods can talk to each other — but most engineers don't know how it works. Here's a plain-English explanation of CNI, plugins, and when it matters for you.
You create a pod. It gets an IP address. It can talk to other pods. You've never set up any networking. How does that work?
The answer is CNI — Container Network Interface. Here's how it works and what you need to know as a DevOps engineer.
The Problem CNI Solves
Kubernetes doesn't ship with built-in pod networking. It defines a spec:
"Every pod must get a unique IP address. Pods must be able to communicate with each other across nodes without NAT."
But it doesn't implement this. That's CNI's job.
CNI is a standard interface — a contract — that says: "If you build a network plugin that follows these rules, Kubernetes will use it."
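You can see this contract on any node: the container runtime discovers the active plugin through a config file in the standard CNI directory (the filename and contents vary by plugin):
# The runtime reads the active plugin's config from each node
ls /etc/cni/net.d/
cat /etc/cni/net.d/*.conflist   # some plugins use .conf instead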
How CNI Works (Step by Step)
When Kubernetes creates a pod, here's what happens:
1. The kubelet asks the container runtime (e.g. containerd) to create the pod's sandbox
2. containerd creates a network namespace for the pod
3. containerd calls the CNI plugin: "Set up networking for this namespace"
4. CNI plugin:
a. Creates a virtual ethernet pair (veth pair)
b. Puts one end in the pod's network namespace
c. Puts the other end on the node's root namespace
d. Assigns an IP address from the pod CIDR
e. Sets up routing so the pod can reach other pods
5. Pod is running with a unique IP address
This all happens in milliseconds, invisibly.
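To make steps 4a-4e concrete, here's a rough manual re-creation with plain ip commands (a sketch to run as root on a Linux box; the namespace name, IP, and gateway are made up for illustration):
# 4a-4c: create a veth pair with one end in a new "pod" namespace
ip netns add demo-pod
ip link add veth-demo type veth peer name eth0 netns demo-pod
ip link set veth-demo up
ip -n demo-pod link set eth0 up
# 4d: assign a pod IP from the pod CIDR
ip -n demo-pod addr add 10.244.1.5/24 dev eth0
# 4e: route out via a gateway on the node (e.g. the CNI bridge)
ip -n demo-pod route add default via 10.244.1.1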
The Veth Pair — The Core Mechanism
Every pod gets a virtual ethernet pair:
Pod namespace:            Node namespace:
eth0        ←──────────── vethXXXXXX
10.244.1.5                (connects to bridge)
eth0 inside the pod looks like a real network card. It's actually one end of a virtual cable whose other end (vethXXXXXX) sits in the node's root namespace.
# See the veth pairs on a node
ip link show | grep veth
# See which veth belongs to which pod:
# inside the pod, eth0's iflink is the interface index of its node-side peer
kubectl exec my-pod -- cat /sys/class/net/eth0/iflink
# Then match that index on the node (e.g. if it printed 7)
ip link | grep "^7:"
Popular CNI Plugins and Their Differences
Flannel
The simplest CNI. Uses a VXLAN overlay network that wraps pod traffic in UDP packets.
- Good for: Simple clusters, getting started
- Bad for: Performance-sensitive workloads (overlay overhead), no NetworkPolicy support
Calico
Most popular in production. Uses BGP routing (no overlay in most modes).
- Good for: Performance, NetworkPolicy enforcement, large clusters
- Bad for: Complex BGP configuration in some environments
Cilium
eBPF-based: implements networking at the Linux kernel level and can replace iptables/kube-proxy entirely.
- Good for: Maximum performance, observability (Hubble UI), L7 NetworkPolicies
- Bad for: Older nodes; requires Linux kernel 5.4+ (most modern nodes have this)
Weave Net
Simple overlay, easy setup, automatic mesh between nodes.
- Good for: Multi-cloud, edge clusters
- Bad for: Higher overhead than Calico/Cilium
AWS VPC CNI (EKS)
Uses real AWS VPC IPs for pods — pods get ENI IPs directly.
- Good for: EKS clusters, direct VPC integration, no overlay
- Bad for: Limited by ENI IP limits per instance type (can run out of pod IPs)
CNI and Network Policies
NetworkPolicies in Kubernetes are only enforced if your CNI supports them.
Flannel does NOT enforce NetworkPolicies: you can create them, but nothing happens. Calico, Cilium, and Weave do enforce them.
This matters when you run this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}   # Applies to all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
With Flannel: this does nothing; all pods still talk to each other. With Calico/Cilium: it blocks all traffic in the production namespace.
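One way to verify enforcement yourself (a quick sketch; the target IP 10.244.2.5 is illustrative, and the test assumes your cluster allows NET_RAW for ping):
# Ping another pod's IP from a throwaway pod in the namespace
kubectl run np-test -n production --rm -it --restart=Never --image=busybox \
  -- ping -c 2 -W 2 10.244.2.5
# Flannel: replies keep coming even with deny-all applied
# Calico/Cilium: the ping times out once deny-all is applied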
# Check which CNI you're running
kubectl get pods -n kube-system | grep -E "calico|cilium|flannel|weave|aws-node"
# On EKS specifically
kubectl get daemonset -n kube-system aws-node
Pod CIDR vs Node CIDR vs Service CIDR
These three IP ranges coexist in every Kubernetes cluster:
Pod CIDR: 10.244.0.0/16 — IPs assigned to pods
├── Node 1: 10.244.1.0/24
├── Node 2: 10.244.2.0/24
└── Node 3: 10.244.3.0/24
Node CIDR: 192.168.1.0/24 — IPs of the actual nodes (VMs)
Service CIDR: 10.96.0.0/12 — Virtual IPs for Services
(not real IPs — only exist in iptables/eBPF rules)
The CNI owns the Pod CIDR. Each node gets a slice of it, and the CNI assigns IPs from that slice to pods on the node.
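On clusters where the controller manager allocates per-node ranges (Flannel and similar setups, not the AWS VPC CNI), you can see each node's slice with standard kubectl output options:
# Show each node's slice of the Pod CIDR (may be empty on some CNIs)
kubectl get nodes -o custom-columns='NODE:.metadata.name,POD_CIDR:.spec.podCIDR'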
Why EKS CNI Is Different
On EKS, the aws-node DaemonSet (AWS VPC CNI) gives pods real VPC IP addresses from your subnet:
VPC Subnet: 10.0.1.0/24
EC2 node: 10.0.1.10
Pod 1: 10.0.1.47 ← real VPC IP, assigned to secondary ENI
Pod 2: 10.0.1.52 ← real VPC IP
This means pods can communicate directly with other AWS services (RDS, ElastiCache) without extra NAT. But it also means you can run out of IPs: a t3.medium tops out at 17 pods, because it has only 3 ENIs with 6 IPv4 addresses each, and one address per ENI is reserved as the ENI's own primary IP.
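The arithmetic behind that limit (the standard EKS max-pods formula, without prefix delegation):
# max pods = ENIs × (IPv4 addresses per ENI - 1) + 2
# the +2 covers host-network pods (aws-node, kube-proxy)
# t3.medium: 3 × (6 - 1) + 2 = 17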
# Check how many IPs your node type supports
aws ec2 describe-instance-types \
--instance-types t3.medium \
--query 'InstanceTypes[].NetworkInfo.{ENIs:MaximumNetworkInterfaces,IPs:Ipv4AddressesPerInterface}'
What DevOps Engineers Actually Need to Know
You don't need to implement CNI. But you need to:
- Know which CNI is running — affects NetworkPolicy support
- Understand IP exhaustion on EKS — pick the right instance type, enable prefix delegation (see the command after this list)
- Know Cilium/Calico for CKA exam — it's tested
- Debug pod networking issues — know that CNI controls pod IP assignment
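For the prefix-delegation point above, the AWS VPC CNI reads an env var on the aws-node DaemonSet; one way to flip it (check the VPC CNI docs for version and instance-type requirements):
# Enable prefix delegation on the AWS VPC CNI
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true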
# Pod can't reach another pod? Debug CNI:
kubectl exec pod-a -- ping 10.244.2.5 # Can pod-a reach pod-b's IP?
kubectl exec pod-a -- nslookup pod-b-service # DNS working?
kubectl get networkpolicy -A # Any NetworkPolicies blocking traffic?
kubectl describe networkpolicy -n <namespace> # What does the policy allow?
CNI is one of those foundational Kubernetes concepts that's invisible until something breaks. Understanding it saves hours during networking incidents.
For CKA exam prep including hands-on networking labs, KodeKloud is the most thorough resource available.