Cilium Complete Guide: eBPF-Powered Kubernetes Networking and Security in 2026

Master Cilium — the eBPF-based CNI that's become the default for Kubernetes networking. Covers installation, network policies, Hubble observability, and service mesh mode.

DevOpsBoys · Mar 18, 2026 · 5 min read

Cilium has quietly become the most important networking project in the Kubernetes ecosystem. Google built GKE's Dataplane V2 on it. AWS adopted it as the default CNI for EKS Anywhere. And it's the only CNI that can replace your service mesh, your network policies, and your observability stack — all with zero sidecars.

The secret? eBPF. Cilium runs tiny programs directly in the Linux kernel, bypassing iptables entirely. The result is faster networking, deeper visibility, and security enforcement that traditional CNIs simply can't match.

This guide covers everything you need to know to use Cilium in production.

Why Cilium Over Traditional CNIs

Traditional CNIs like Flannel, and Calico in its default mode, rely on iptables (together with kube-proxy) for packet filtering and service routing. This works, but it creates problems at scale:

  • iptables rules grow linearly with services — 10,000 services means 10,000+ rules
  • No L7 visibility — iptables operates at L3/L4, so you can't see HTTP paths or gRPC methods
  • Sidecar overhead — you need Istio or Linkerd sidecars for L7 features, adding latency and resource consumption

Cilium replaces all of this with eBPF programs that run in the kernel. These programs:

  • Process packets without iptables chains
  • Inspect L7 protocols (HTTP, gRPC, Kafka, DNS) natively
  • Provide observability without sidecars
  • Scale to thousands of services without performance degradation
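You can see the iptables rule growth for yourself on a cluster that still uses kube-proxy in iptables mode. This is a rough illustration run on a node, not through kubectl; it requires root, and the KUBE-SVC/KUBE-SEP chain names assume kube-proxy's iptables mode:

```
# Count the total iptables rules programmed on this node
sudo iptables-save | grep -c '^-A'

# Count just the kube-proxy service and endpoint chains
sudo iptables-save | grep -cE '^-A KUBE-(SVC|SEP)'
```

On a Cilium cluster with kube-proxy replacement, both numbers stay small and flat no matter how many services you add.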

Installing Cilium

bash
helm repo add cilium https://helm.cilium.io/
helm repo update
 
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=YOUR_API_SERVER_IP \
  --set k8sServicePort=6443 \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Key flags:

  • kubeProxyReplacement=true — replaces kube-proxy entirely with eBPF (faster service routing)
  • hubble.enabled=true — enables the observability layer
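Once the agent is running, you can confirm the eBPF kube-proxy replacement is actually active by querying the agent itself. A quick check (in Cilium releases before 1.14 the in-pod binary is `cilium` rather than `cilium-dbg`, and the exact output format varies by version):

```
kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep KubeProxyReplacement
```

You want to see it reported as True; if it shows Disabled or Partial, service routing is still falling back to kube-proxy.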

Verify Installation

bash
cilium status --wait

You should see:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Hubble Relay:       OK
 \__/¯¯\__/    ClusterMesh:        disabled
    \__/
Cluster Pods:       12/12 managed by Cilium

Run the connectivity test:

bash
cilium connectivity test

This runs ~40 tests to verify DNS, L3/L4 policies, L7 policies, and encryption are all working.

Cilium Network Policies

Cilium supports standard Kubernetes NetworkPolicy objects, but also provides CiliumNetworkPolicy (CNP) for advanced features.

Basic L3/L4 Policy

Allow traffic from frontend to backend on port 8080:

yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP

L7 HTTP Policy

This is where Cilium shines. Restrict traffic to specific HTTP paths and methods:

yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-http-policy
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/v1/products"
        - method: POST
          path: "/api/v1/orders"

This allows the frontend to GET products and POST orders — but nothing else. No DELETE, no admin endpoints. All enforced at the kernel level without a sidecar.
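You can verify the policy from a frontend pod. The deployment and service names below are hypothetical, and note that Cilium's L7 proxy typically answers a denied HTTP request with a 403 rather than silently dropping the connection:

```
# Allowed by the policy
kubectl exec deploy/frontend -- curl -s -o /dev/null -w '%{http_code}\n' \
  http://api-server:8080/api/v1/products

# Not in the allow list: DELETE on the same path should come back 403
kubectl exec deploy/frontend -- curl -s -o /dev/null -w '%{http_code}\n' \
  -X DELETE http://api-server:8080/api/v1/products
```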

DNS-Based Policy

Control which external domains pods can access:

yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: restrict-external-access
spec:
  endpointSelector:
    matchLabels:
      app: payment-service
  egress:
  - toFQDNs:
    - matchName: "api.stripe.com"
    - matchName: "api.paypal.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"

The payment service can only reach Stripe and PayPal; everything else is blocked. Note the dns rule on the kube-dns egress: toFQDNs matching depends on Cilium's DNS proxy observing lookups, so DNS traffic must be explicitly allowed with a dns rule or the FQDN policy cannot resolve names to IPs. This kind of egress control is a major win for compliance and security.
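A quick way to exercise the FQDN policy from inside the pod (hypothetical deployment name; the blocked request should time out rather than be refused, since the packets are simply dropped):

```
# Allowed egress: should return an HTTP status from Stripe
kubectl exec deploy/payment-service -- curl -s -o /dev/null -w '%{http_code}\n' \
  https://api.stripe.com

# Blocked egress: should hang until the timeout fires
kubectl exec deploy/payment-service -- curl -s --max-time 5 https://example.com \
  || echo "blocked as expected"
```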

Hubble — Observability Without Sidecars

Hubble is Cilium's built-in observability layer. It captures every network flow in the cluster using eBPF — no agents, no sidecars, no sampling.

Install the Hubble CLI

bash
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz
tar xzvf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin/

Observe Network Flows

bash
# Port forward to Hubble Relay
cilium hubble port-forward &
 
# Watch all flows
hubble observe
 
# Filter by namespace
hubble observe --namespace production
 
# Filter by pod
hubble observe --pod production/api-server
 
# Filter by verdict (dropped traffic)
hubble observe --verdict DROPPED
 
# Filter by HTTP status
hubble observe --http-status-code 500

Hubble UI

Access the Hubble UI to see a visual service map:

bash
cilium hubble ui

This opens a browser with a real-time dependency graph showing which services talk to each other, latency between them, and error rates — all without touching your application code.

Cilium Service Mesh (Sidecar-Free)

Cilium can replace Istio or Linkerd with a sidecar-free service mesh. Instead of injecting proxy containers into every pod, Cilium handles L7 traffic management in the kernel.

Enable Service Mesh

bash
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --set ingressController.enabled=true \
  --set envoyConfig.enabled=true

Ingress Example

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    ingress.cilium.io/loadbalancer-mode: shared
spec:
  ingressClassName: cilium
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-server
            port:
              number: 8080
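After the Ingress is created, Cilium provisions a LoadBalancer service for it. Assuming your environment can allocate an external IP, you can find the address and test routing with the configured host header:

```
# Find the external address of the shared Cilium ingress load balancer
LB=$(kubectl get ingress api-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Route a request through it using the host the rule matches on
curl -s -H "Host: api.example.com" "http://$LB/"
```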

Traffic Splitting (Canary)

yaml
apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: canary-split
spec:
  services:
  - name: api-server
    namespace: production
  resources:
  - "@type": type.googleapis.com/envoy.config.route.v3.RouteConfiguration
    virtualHosts:
    - name: api-server
      domains: ["*"]
      routes:
      - match:
          prefix: "/"
        route:
          weightedClusters:
            clusters:
            - name: "production/api-server-stable"
              weight: 90
            - name: "production/api-server-canary"
              weight: 10

90% of traffic goes to stable, 10% to canary — no Istio required.
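To sanity-check the split, send a batch of requests and tally which backend answered. This sketch assumes the stable and canary deployments tag responses with an X-Version header, which is a hypothetical convention for this example; any distinguishing response field works:

```
# Send 100 requests from inside the cluster and count responses per version
for i in $(seq 1 100); do
  kubectl exec deploy/frontend -- curl -s -D - -o /dev/null \
    http://api-server.production:8080/ | grep -i '^x-version'
done | sort | uniq -c
```

A tally of roughly 90 stable to 10 canary indicates the weights are being applied.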

Tetragon — Runtime Security

Tetragon is Cilium's runtime security component. It uses eBPF to monitor system calls, file access, and process execution at the kernel level.

Install Tetragon

bash
helm install tetragon cilium/tetragon \
  --namespace kube-system

Detect Sensitive File Access

yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: detect-sensitive-files
spec:
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 1
      type: "file"
    selectors:
    - matchArgs:
      - index: 1
        operator: "Prefix"
        values:
        - "/etc/shadow"
        - "/etc/passwd"
        - "/root/.ssh"

This emits an event whenever any container opens /etc/shadow, /etc/passwd, or anything under /root/.ssh. Real-time, kernel-level, and with no per-pod agent required.
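With the policy loaded, you can stream matching events straight from any Tetragon pod using the bundled tetra CLI:

```
# Follow Tetragon events in a human-readable format
kubectl exec -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact
```

Pipe the JSON output (drop `-o compact`) into your log pipeline if you want to alert on these events rather than just watch them.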

Performance: Cilium vs iptables

Benchmarks from real production clusters:

Metric                    iptables (Calico)        Cilium (eBPF)
Latency (P99)             1.2 ms                   0.4 ms
Throughput                8 Gbps                   12 Gbps
CPU per node              15%                      8%
Rules at 5,000 services   25,000+ iptables rules   O(1) eBPF maps
L7 visibility             Requires sidecar         Native

The performance gap widens as cluster size grows because eBPF map lookups are O(1) while iptables rule chains are O(n).

Production Checklist

Before running Cilium in production:

  • Linux kernel 5.10+ (5.15+ recommended for full features)
  • kube-proxy replacement enabled for best performance
  • Hubble enabled with appropriate retention
  • CiliumNetworkPolicies for critical namespaces
  • Monitoring CoreDNS through Cilium's DNS proxy
  • Tetragon for runtime security in sensitive workloads
  • Cilium connectivity test passing
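For the first item on the list, here is a small preflight snippet that compares the running kernel against Cilium's recommended minimum. It is pure POSIX shell, so it works on any node or CI runner:

```shell
# Compare the running kernel version against a required minimum using sort -V
required="5.10"
current="$(uname -r | cut -d- -f1)"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  echo "kernel $current OK (>= $required)"
else
  echo "kernel $current is below the recommended $required"
fi
```

Swap `required` for "5.15" if you depend on the newer eBPF features the checklist mentions.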

Wrapping Up

Cilium is more than a CNI. It's a networking, observability, security, and service mesh platform — all powered by eBPF and running in the kernel. If you're starting a new cluster in 2026, Cilium should be your default choice.

The learning curve is real, but the payoff is massive: fewer components, better performance, and deeper visibility than any traditional networking stack.

Want to master Kubernetes networking from the ground up? KodeKloud's Kubernetes courses cover CNI plugins, network policies, and hands-on labs for production networking scenarios. For testing Cilium yourself, spin up a cluster on DigitalOcean Kubernetes — it supports custom CNI installations.
