Cilium Complete Guide: eBPF-Powered Kubernetes Networking and Security in 2026
Master Cilium — the eBPF-based CNI that's become the default for Kubernetes networking. Covers installation, network policies, Hubble observability, and service mesh mode.
Cilium has quietly become the most important networking project in the Kubernetes ecosystem. Google made it the dataplane for GKE (Dataplane V2). AWS ships it as the default CNI in EKS Anywhere. And it's the only mainstream CNI that can replace your service mesh, your network policies, and your observability stack — all with zero sidecars.
The secret? eBPF. Cilium runs tiny programs directly in the Linux kernel, bypassing iptables entirely. The result is faster networking, deeper visibility, and security enforcement that traditional CNIs simply can't match.
This guide covers everything you need to know to use Cilium in production.
Why Cilium Over Traditional CNIs
Traditional CNIs like Calico or Flannel rely on iptables for packet filtering. This works but creates problems at scale:
- iptables rules grow linearly with services — 10,000 services means 10,000+ rules
- No L7 visibility — iptables operates at L3/L4, so you can't see HTTP paths or gRPC methods
- Sidecar overhead — you need Istio or Linkerd sidecars for L7 features, adding latency and resource consumption
Cilium replaces all of this with eBPF programs that run in the kernel. These programs:
- Process packets without iptables chains
- Inspect L7 protocols (HTTP, gRPC, Kafka, DNS) natively
- Provide observability without sidecars
- Scale to thousands of services without performance degradation
Installing Cilium
Using Helm (Recommended)
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=YOUR_API_SERVER_IP \
  --set k8sServicePort=6443 \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Key flags:
- kubeProxyReplacement=true — replaces kube-proxy entirely with eBPF for faster service routing
- hubble.enabled=true — enables the Hubble observability layer
Verify Installation
cilium status --wait

You should see:
    /¯¯\
 /¯¯\__/¯¯\    Cilium:          OK
 \__/¯¯\__/    Operator:        OK
 /¯¯\__/¯¯\    Hubble Relay:    OK
 \__/¯¯\__/    ClusterMesh:     disabled
    \__/

Cluster Pods: 12/12 managed by Cilium
Run the connectivity test:
cilium connectivity test

This runs ~40 tests to verify DNS, L3/L4 policies, L7 policies, and encryption are all working.
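The encryption tests only apply if transparent encryption is turned on. If you want node-to-node WireGuard encryption, it can be enabled at install time; a Helm values fragment for this (these are the standard Cilium chart keys) looks like:

```yaml
# values.yaml fragment: transparent WireGuard encryption between nodes
encryption:
  enabled: true
  type: wireguard
```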
Cilium Network Policies
Cilium supports standard Kubernetes NetworkPolicy objects, but also provides CiliumNetworkPolicy (CNP) for advanced features.
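For comparison, here is a plain Kubernetes NetworkPolicy expressing a frontend-to-backend allow rule (labels are illustrative). Cilium enforces these natively, but they stop at L3/L4:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```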
Basic L3/L4 Policy
Allow traffic from frontend to backend on port 8080:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP

L7 HTTP Policy
This is where Cilium shines. Restrict traffic to specific HTTP paths and methods:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-http-policy
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/v1/products"
        - method: POST
          path: "/api/v1/orders"

This allows the frontend to GET products and POST orders — but nothing else. No DELETE, no admin endpoints. eBPF redirects matching traffic to a node-local proxy for L7 enforcement, so there is no per-pod sidecar.
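Path values are regular expressions, so a rule fragment like the following sketch (paths and header name are illustrative) can scope access to parameterized routes or require a specific header:

```yaml
rules:
  http:
  - method: "GET"
    path: "/api/v1/orders/[0-9]+"    # regex match on the order ID
  - method: "POST"
    path: "/api/v1/orders"
    headers:
    - "X-Request-Source: frontend"   # require this exact header value
```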
DNS-Based Policy
Control which external domains pods can access:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: restrict-external-access
spec:
  endpointSelector:
    matchLabels:
      app: payment-service
  egress:
  - toFQDNs:
    - matchName: "api.stripe.com"
    - matchName: "api.paypal.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"

The payment service can only reach Stripe and PayPal. Everything else is blocked. Note the DNS rule on the kube-dns egress: toFQDNs relies on Cilium's DNS proxy observing lookups, so DNS traffic must be both allowed and inspected for the FQDN allowlist to work. This is massive for compliance and security.
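toFQDNs also supports wildcards via matchPattern, which is useful when a provider spreads its API across subdomains (the domain below is illustrative):

```yaml
egress:
- toFQDNs:
  - matchPattern: "*.stripe.com"   # any subdomain of stripe.com
  toPorts:
  - ports:
    - port: "443"
      protocol: TCP
```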
Hubble — Observability Without Sidecars
Hubble is Cilium's built-in observability layer. It captures every network flow in the cluster using eBPF — no agents, no sidecars, no sampling.
Install the Hubble CLI
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz
tar xzvf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin/

Observe Network Flows
# Port forward to Hubble Relay
cilium hubble port-forward &
# Watch all flows
hubble observe
# Filter by namespace
hubble observe --namespace production
# Filter by pod
hubble observe --pod production/api-server
# Filter by verdict (dropped traffic)
hubble observe --verdict DROPPED
# Filter by HTTP status
hubble observe --http-status-code 500

Hubble UI
Access the Hubble UI to see a visual service map:
cilium hubble ui

This opens a browser with a real-time dependency graph showing which services talk to each other, latency between them, and error rates — all without touching your application code.
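Hubble can also export per-protocol Prometheus metrics. A Helm values sketch (the metric names follow the Hubble metrics documentation):

```yaml
# values.yaml fragment: enable Hubble's Prometheus metrics
hubble:
  metrics:
    enabled:
    - dns
    - drop
    - tcp
    - flow
    - http
```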
Cilium Service Mesh (Sidecar-Free)
Cilium can replace Istio or Linkerd with a sidecar-free service mesh. Instead of injecting proxy containers into every pod, Cilium handles L7 traffic management in the kernel.
Enable Service Mesh
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set ingressController.enabled=true \
  --set envoyConfig.enabled=true

Note the --reuse-values flag: without it, helm upgrade resets every value not passed on the command line back to chart defaults, which would silently disable Hubble.

Ingress Example
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    ingress.cilium.io/loadbalancer-mode: shared
spec:
  ingressClassName: cilium
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-server
            port:
              number: 8080

Traffic Splitting (Canary)
apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: canary-split
spec:
  services:
  - name: api-server
    namespace: production
  resources:
  - "@type": type.googleapis.com/envoy.config.route.v3.RouteConfiguration
    virtualHosts:
    - name: api-server
      domains: ["*"]
      routes:
      - match:
          prefix: "/"
        route:
          weightedClusters:
            clusters:
            - name: "production/api-server-stable"
              weight: 90
            - name: "production/api-server-canary"
              weight: 10

90% of traffic goes to stable, 10% to canary — no Istio required.
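The weighted cluster names refer to Kubernetes Services. A minimal sketch of the two backing Services this example assumes (the track label used to split stable and canary pods is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-server-stable
  namespace: production
spec:
  selector:
    app: api-server
    track: stable      # illustrative label on the stable Deployment's pods
  ports:
  - port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-server-canary
  namespace: production
spec:
  selector:
    app: api-server
    track: canary      # illustrative label on the canary Deployment's pods
  ports:
  - port: 8080
```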
Tetragon — Runtime Security
Tetragon is Cilium's runtime security component. It uses eBPF to monitor system calls, file access, and process execution at the kernel level.
Install Tetragon
helm install tetragon cilium/tetragon \
  --namespace kube-system

Detect Sensitive File Access
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: detect-sensitive-files
spec:
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 1
      type: "file"
    selectors:
    - matchArgs:
      - index: 1
        operator: "Prefix"
        values:
        - "/etc/shadow"
        - "/etc/passwd"
        - "/root/.ssh"

This alerts whenever any container opens /etc/shadow, /etc/passwd, or anything under /root/.ssh. Real-time, with near-zero overhead, and no extra agent beyond the Tetragon DaemonSet itself.
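File access is only one of several hook points. A similar sketch, modeled on the tcp_connect example from the Tetragon documentation, records outbound TCP connections per process:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: monitor-tcp-connect
spec:
  kprobes:
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock"   # exposes source/destination address and port of the socket
```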
Performance: Cilium vs iptables
Benchmarks from real production clusters:
| Metric | iptables (Calico) | Cilium (eBPF) |
|---|---|---|
| Latency (P99) | 1.2ms | 0.4ms |
| Throughput | 8 Gbps | 12 Gbps |
| CPU per node | 15% | 8% |
| Rules at 5000 services | 25,000+ iptables rules | O(1) eBPF maps |
| L7 visibility | Requires sidecar | Native |
The performance gap widens as cluster size grows because eBPF map lookups are O(1) while iptables rule chains are O(n).
Production Checklist
Before running Cilium in production:
- Linux kernel 5.10+ (5.15+ recommended for full features)
- kube-proxy replacement enabled for best performance
- Hubble enabled with appropriate retention
- CiliumNetworkPolicies for critical namespaces
- Monitoring CoreDNS through Cilium's DNS proxy
- Tetragon for runtime security in sensitive workloads
- Cilium connectivity test passing
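Most of this checklist can be captured in a single Helm values file. A starting sketch, using the same keys as the flags earlier in this guide (tune encryption and Hubble settings for your environment):

```yaml
# values.yaml sketch for a production install
kubeProxyReplacement: true
k8sServiceHost: YOUR_API_SERVER_IP
k8sServicePort: 6443
hubble:
  enabled: true
  relay:
    enabled: true
  ui:
    enabled: true
encryption:
  enabled: true        # optional: WireGuard node-to-node encryption
  type: wireguard
```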
Wrapping Up
Cilium is more than a CNI. It's a networking, observability, security, and service mesh platform — all powered by eBPF and running in the kernel. If you're starting a new cluster in 2026, Cilium should be your default choice.
The learning curve is real, but the payoff is massive: fewer components, better performance, and deeper visibility than any traditional networking stack.
Want to master Kubernetes networking from the ground up? KodeKloud's Kubernetes courses cover CNI plugins, network policies, and hands-on labs for production networking scenarios. For testing Cilium yourself, spin up a cluster on DigitalOcean Kubernetes — it supports custom CNI installations.
Related Articles
How to Set Up Istio Service Mesh from Scratch (2026)
Step-by-step guide to installing and configuring Istio service mesh on Kubernetes. Covers traffic management, mTLS, observability, canary deployments, and production best practices.
How to Set Up Tailscale for Zero-Trust Access to Your DevOps Infrastructure
Step-by-step guide to setting up Tailscale for secure access to Kubernetes clusters, databases, and internal tools without traditional VPNs.