Cilium and eBPF Networking — Complete Guide for DevOps Engineers (2026)
Everything you need to know about Cilium, the eBPF-powered CNI for Kubernetes. Covers architecture, installation, network policies, observability with Hubble, and replacing kube-proxy.
Cilium has gone from an experimental CNI plugin to a default networking choice for production Kubernetes clusters. AWS EKS, Google GKE, and Azure AKS all offer native Cilium support. It is a CNCF graduated project, and its creator, Isovalent, was acquired by Cisco in 2024.
If you're running Kubernetes in 2026 and not using Cilium, you're leaving performance, security, and observability on the table. This guide covers everything you need to know.
What Is Cilium?
Cilium is a Kubernetes CNI (Container Network Interface) plugin that uses eBPF (extended Berkeley Packet Filter) to provide networking, security, and observability — all at the Linux kernel level.
Traditional CNIs like Calico and Flannel rely on iptables rules for packet filtering and routing. Cilium largely replaces iptables with eBPF programs that run directly in the kernel, resulting in:
- Better performance — eBPF processes packets faster than iptables chain traversal
- Lower latency — especially at scale (thousands of services)
- L7 visibility — Cilium can inspect HTTP, gRPC, Kafka, and DNS traffic without sidecars
- Identity-based security — policies based on pod identity, not IP addresses
Architecture Overview
┌─────────────────────────────────────────────┐
│                 User Space                  │
│  ┌──────────┐  ┌───────────┐  ┌──────────┐  │
│  │  Cilium  │  │  Hubble   │  │  Cilium  │  │
│  │  Agent   │  │ (Observe) │  │ Operator │  │
│  └──────────┘  └───────────┘  └──────────┘  │
├─────────────────────────────────────────────┤
│                Kernel Space                 │
│  ┌───────────────────────────────────────┐  │
│  │             eBPF Programs             │  │
│  │ ┌─────────┐  ┌──────────┐  ┌───────┐  │  │
│  │ │ Network │  │ Security │  │  L7   │  │  │
│  │ │ Routing │  │ Policies │  │ Parse │  │  │
│  │ └─────────┘  └──────────┘  └───────┘  │  │
│  └───────────────────────────────────────┘  │
└─────────────────────────────────────────────┘
Key components:
- Cilium Agent — runs as a DaemonSet on every node, manages eBPF programs
- Cilium Operator — handles cluster-wide operations (IPAM, CRD management)
- Hubble — observability layer that provides flow logs, service maps, and metrics
- eBPF Programs — compiled and loaded into the kernel for packet processing
Installing Cilium
Prerequisites
- Kubernetes 1.27+ cluster
- Linux kernel 5.10+ (for full eBPF feature support)
- No other CNI installed (or you'll need to migrate)
Using Helm (Recommended)
# Add Cilium Helm repo
helm repo add cilium https://helm.cilium.io/
helm repo update
# Install Cilium
helm install cilium cilium/cilium --version 1.16.5 \
--namespace kube-system \
--set kubeProxyReplacement=true \
--set k8sServiceHost=${API_SERVER_IP} \
--set k8sServicePort=${API_SERVER_PORT} \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
Verify Installation
# Install Cilium CLI
curl -L https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz | tar xz
sudo mv cilium /usr/local/bin/
# Check status
cilium status --wait
Expected output:
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble Relay: OK
\__/¯¯\__/ ClusterMesh: disabled
\__/
Deployment cilium-operator Desired: 1, Ready: 1/1
DaemonSet cilium Desired: 3, Ready: 3/3
Deployment hubble-relay Desired: 1, Ready: 1/1
Run the connectivity test:
cilium connectivity test
This deploys test pods and validates DNS, L3/L4 connectivity, L7 policies, and encryption.
Replacing kube-proxy with Cilium
One of Cilium's biggest advantages is that it can completely replace kube-proxy. In its default iptables mode, kube-proxy implements service load balancing as chains of iptables rules, which get slow with thousands of services (O(n) rule traversal). Cilium uses eBPF hash maps for O(1) lookups.
Performance Difference
| Metric | kube-proxy (iptables) | Cilium eBPF |
|---|---|---|
| Service lookup | O(n) rules | O(1) hash map |
| Rule update (1000 services) | ~5 seconds | ~50ms |
| Connection tracking | conntrack table | eBPF map |
| Latency overhead | ~0.5ms per hop | ~0.1ms per hop |
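The scaling difference in the table can be illustrated with a toy model (illustrative Python only, not Cilium or kube-proxy code): an iptables-style lookup scans an ordered rule list, while an eBPF-style lookup hashes straight to the backend.

```python
# Toy comparison: iptables-style linear rule scan vs eBPF-style hash-map lookup.
# Purely illustrative — the real data structures are far more complex.

# iptables model: an ordered list of (service_ip, backend) rules, scanned top to bottom
rules = [(f"10.96.0.{i}", f"backend-{i}") for i in range(1, 1001)]

def iptables_lookup(service_ip):
    for ip, backend in rules:        # O(n): may traverse the whole chain
        if ip == service_ip:
            return backend
    return None

# eBPF model: a hash map keyed by service IP
ebpf_map = dict(rules)

def ebpf_lookup(service_ip):
    return ebpf_map.get(service_ip)  # O(1): single hash lookup

assert iptables_lookup("10.96.0.1000") == ebpf_lookup("10.96.0.1000") == "backend-1000"
```

Both return the same backend; the difference is that the linear scan cost grows with the number of services while the hash lookup does not.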
Enable kube-proxy Replacement
If you installed Cilium with kubeProxyReplacement=true, kube-proxy is already replaced. Remove the kube-proxy DaemonSet:
kubectl -n kube-system delete daemonset kube-proxy
Or disable it by pinning it to a node selector that matches no nodes:
kubectl -n kube-system patch daemonset kube-proxy -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
On managed clusters like EKS:
# EKS: Install Cilium without kube-proxy from the start
eksctl create cluster --name my-cluster --without-nodegroup
# Then install Cilium with kubeProxyReplacement=true before adding nodes
Network Policies — Identity-Based Security
Cilium's network policies are far more powerful than standard Kubernetes NetworkPolicies. They operate on pod identity (labels), not IP addresses, and support L7 filtering.
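Identity-based matching can be sketched as a label-subset check (illustrative Python; internally Cilium maps each label set to a numeric security identity rather than comparing labels per packet):

```python
# Illustrative sketch of identity-based policy matching: a policy selects peers
# by labels, so it keeps applying when pods reschedule and their IPs change.

def matches(selector: dict, pod_labels: dict) -> bool:
    # A selector matches when every required label is present with the right value
    return all(pod_labels.get(k) == v for k, v in selector.items())

policy_from = {"app": "frontend"}  # like a fromEndpoints selector

old_pod = {"app": "frontend", "pod-template-hash": "abc123"}
rescheduled_pod = {"app": "frontend", "pod-template-hash": "def456"}  # new pod, new IP

assert matches(policy_from, old_pod)
assert matches(policy_from, rescheduled_pod)   # identity unchanged, policy still applies
assert not matches(policy_from, {"app": "batch-job"})
```

An IP-based rule would have to be rewritten every time a pod moves; a label-based rule never does.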
Standard L3/L4 Policy
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
L7 HTTP Policy (No Sidecar Needed!)
This is where Cilium shines. You can filter by HTTP method, path, and headers — without deploying a service mesh:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/v1/products"
        - method: POST
          path: "/api/v1/orders"
This allows the frontend to only GET /api/v1/products and POST /api/v1/orders — any other request is denied. No Istio, no Envoy sidecar, no additional resource overhead.
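The allow-list semantics can be sketched in a few lines (illustrative Python, not Cilium's enforcement engine; Cilium matches HTTP rule paths as regular expressions):

```python
# Illustrative evaluation of an L7 allow-list of (method, path-regex) pairs.
import re

ALLOWED = [("GET", r"/api/v1/products"), ("POST", r"/api/v1/orders")]

def l7_verdict(method: str, path: str) -> str:
    for allowed_method, pattern in ALLOWED:
        # First matching rule allows the request; no match means deny
        if method == allowed_method and re.fullmatch(pattern, path):
            return "ALLOWED"
    return "DENIED"

assert l7_verdict("GET", "/api/v1/products") == "ALLOWED"
assert l7_verdict("DELETE", "/api/v1/orders") == "DENIED"  # method not allow-listed
```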
DNS-Based Policy
Allow pods to only reach specific external domains:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: external-access
spec:
  endpointSelector:
    matchLabels:
      app: payment-service
  egress:
  - toFQDNs:
    - matchName: "api.stripe.com"
    - matchName: "api.paypal.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"
The DNS rule on the kube-dns egress matters: Cilium's DNS proxy must observe lookups to learn which IPs the allowed FQDNs resolve to.
Hubble — Deep Observability Without Sidecars
Hubble is Cilium's observability platform. It gives you:
- Flow logs — every packet between pods, with L7 metadata
- Service dependency maps — visual graph of service communication
- Metrics — Prometheus-compatible metrics for network flows
- DNS visibility — every DNS query and response
Using Hubble CLI
# Install Hubble CLI
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz | tar xz
sudo mv hubble /usr/local/bin/
# Port-forward Hubble Relay
cilium hubble port-forward &
# Observe all flows
hubble observe
# Filter by namespace
hubble observe --namespace production
# Filter by pod
hubble observe --to-pod production/api-server
# Show only dropped packets (policy violations)
hubble observe --verdict DROPPED
# Show HTTP flows with status codes
hubble observe --protocol http -o json | jq '.flow.l7.http'
Hubble UI
Access the visual service map:
cilium hubble ui
This opens a browser with a real-time dependency graph showing:
- Which services talk to each other
- Request rates and latencies
- Error rates per connection
- Policy verdicts (allowed/denied)
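The CLI filters above can be mimicked on exported flow JSON. A minimal sketch over hand-written sample flows (field names loosely follow Hubble's flow format, heavily simplified):

```python
import json

# Hand-written sample flows, loosely shaped like Hubble's JSON output (simplified)
raw = '''[
  {"source": {"namespace": "production", "pod_name": "frontend-1"},
   "destination": {"pod_name": "api-server-1"}, "verdict": "FORWARDED"},
  {"source": {"namespace": "production", "pod_name": "batch-job-1"},
   "destination": {"pod_name": "api-server-1"}, "verdict": "DROPPED"}
]'''

flows = json.loads(raw)

# Rough equivalent of `hubble observe --verdict DROPPED`
dropped = [f for f in flows if f["verdict"] == "DROPPED"]

assert len(dropped) == 1
assert dropped[0]["source"]["pod_name"] == "batch-job-1"
```

In practice you would pipe `hubble observe -o json` into a script like this instead of hard-coding flows.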
Hubble Prometheus Metrics
# In Cilium Helm values
hubble:
  metrics:
    enabled:
    - dns
    - drop
    - tcp
    - flow
    - icmp
    - http
    serviceMonitor:
      enabled: true
Key metrics:
- hubble_flows_processed_total — total flows
- hubble_drop_total{reason="POLICY_DENIED"} — policy drops
- hubble_http_requests_total{method="GET",status="200"} — HTTP metrics per endpoint
- hubble_dns_queries_total — DNS query volume
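As a quick sanity check on those counters, the policy-drop share of traffic is just a ratio (the counter values below are made up; in practice they come from Prometheus queries):

```python
# Made-up sample values for hubble_flows_processed_total and
# hubble_drop_total{reason="POLICY_DENIED"} — purely illustrative.
flows_processed_total = 120_000
drops_policy_denied = 300

drop_pct = 100 * drops_policy_denied / flows_processed_total
assert round(drop_pct, 2) == 0.25  # 0.25% of flows denied by policy
```

Alerting on a sudden rise in this ratio is a common way to catch a misapplied policy early.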
Transparent Encryption with WireGuard
Cilium can encrypt all pod-to-pod traffic using WireGuard — no application changes, no certificates to manage:
# In Cilium Helm values
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: true  # Also encrypts node-to-node traffic
That's it. All inter-pod traffic is now encrypted at the kernel level with WireGuard. No mTLS certificates, no sidecar proxies, and minimal performance impact (WireGuard adds ~5% overhead vs ~15% for Istio mTLS).
Cilium vs Traditional CNIs
| Feature | Calico | Flannel | Cilium |
|---|---|---|---|
| eBPF-native | Partial | No | Yes |
| kube-proxy replacement | Yes (eBPF mode) | No | Yes |
| L7 policies (no sidecar) | No | No | Yes |
| Built-in observability | No | No | Hubble |
| Transparent encryption | WireGuard | No | WireGuard or IPsec |
| DNS-based policies | No | No | Yes |
| Bandwidth management | No | No | Yes (EDT) |
| Multi-cluster (ClusterMesh) | Yes | No | Yes |
Production Checklist
Before going to production with Cilium:
# 1. Verify kernel version (5.10+ recommended)
uname -r
# 2. Run connectivity tests
cilium connectivity test
# 3. Enable Hubble metrics for monitoring
# (see Helm values above)
# 4. Set up default-deny network policy
cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: default-deny
spec:
  endpointSelector: {}
  ingress:
  - fromEntities:
    - cluster
  egress:
  - toEntities:
    - cluster
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
EOF
# 5. Enable WireGuard encryption
# (see Helm values above)
# 6. Monitor Cilium agent health
kubectl -n kube-system exec ds/cilium -- cilium status --verbose
Learn More
Cilium's power comes from eBPF, and understanding eBPF fundamentals will make you a stronger Kubernetes networking engineer. For hands-on practice with Kubernetes networking and Cilium, check out KodeKloud's Kubernetes courses — they offer real lab environments where you can experiment safely.
If you need a Kubernetes cluster to practice on, DigitalOcean's managed Kubernetes is one of the most affordable options and supports custom CNI installations including Cilium.
Cilium isn't just a CNI — it's a platform. The earlier you adopt it, the more you'll benefit from its networking, security, and observability features.