Cilium and eBPF Networking — Complete Guide for DevOps Engineers (2026)

Everything you need to know about Cilium, the eBPF-powered CNI for Kubernetes. Covers architecture, installation, network policies, observability with Hubble, and replacing kube-proxy.

DevOpsBoys · Mar 26, 2026 · 6 min read

Cilium has gone from an experimental CNI plugin to the default networking choice for production Kubernetes clusters. AWS EKS, Google GKE, and Azure AKS all offer native Cilium support. It's the only CNI that's both a CNCF graduated project and backed by a major acquisition (Isovalent by Cisco).

If you're running Kubernetes in 2026 and not using Cilium, you're leaving performance, security, and observability on the table. This guide covers everything you need to know.

What Is Cilium?

Cilium is a Kubernetes CNI (Container Network Interface) plugin that uses eBPF (extended Berkeley Packet Filter) to provide networking, security, and observability — all at the Linux kernel level.

Traditional CNIs like Calico and Flannel use iptables rules for packet filtering and routing. Cilium replaces iptables entirely with eBPF programs that run directly in the kernel, resulting in:

  • Better performance — eBPF processes packets faster than iptables chain traversal
  • Lower latency — especially at scale (thousands of services)
  • L7 visibility — Cilium can inspect HTTP, gRPC, Kafka, and DNS traffic without sidecars
  • Identity-based security — policies based on pod identity, not IP addresses

Architecture Overview

┌───────────────────────────────────────────────┐
│                  User Space                   │
│  ┌──────────┐  ┌───────────┐  ┌───────────┐   │
│  │ Cilium   │  │ Hubble    │  │ Cilium    │   │
│  │ Agent    │  │ (Observe) │  │ Operator  │   │
│  └──────────┘  └───────────┘  └───────────┘   │
├───────────────────────────────────────────────┤
│                 Kernel Space                  │
│  ┌───────────────────────────────────────┐    │
│  │             eBPF Programs             │    │
│  │  ┌─────────┐ ┌──────────┐ ┌───────┐   │    │
│  │  │ Network │ │ Security │ │ L7    │   │    │
│  │  │ Routing │ │ Policies │ │ Parse │   │    │
│  │  └─────────┘ └──────────┘ └───────┘   │    │
│  └───────────────────────────────────────┘    │
└───────────────────────────────────────────────┘

Key components:

  • Cilium Agent — runs as a DaemonSet on every node, manages eBPF programs
  • Cilium Operator — handles cluster-wide operations (IPAM, CRD management)
  • Hubble — observability layer that provides flow logs, service maps, and metrics
  • eBPF Programs — compiled and loaded into the kernel for packet processing

Installing Cilium

Prerequisites

  • Kubernetes 1.27+ cluster
  • Linux kernel 5.10+ (for full eBPF feature support)
  • No other CNI installed (or you'll need to migrate)

bash
# Add Cilium Helm repo
helm repo add cilium https://helm.cilium.io/
helm repo update
 
# Install Cilium
helm install cilium cilium/cilium --version 1.16.5 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=${API_SERVER_IP} \
  --set k8sServicePort=${API_SERVER_PORT} \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Verify Installation

bash
# Install Cilium CLI
curl -L https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz | tar xz
sudo mv cilium /usr/local/bin/
 
# Check status
cilium status --wait

Expected output:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble Relay:   OK
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

Deployment             cilium-operator    Desired: 1, Ready: 1/1
DaemonSet              cilium             Desired: 3, Ready: 3/3
Deployment             hubble-relay       Desired: 1, Ready: 1/1

Run the connectivity test:

bash
cilium connectivity test

This deploys test pods and validates DNS, L3/L4 connectivity, L7 policies, and encryption.

Replacing kube-proxy with Cilium

One of Cilium's biggest advantages is completely replacing kube-proxy. kube-proxy uses iptables rules for service load balancing — which gets slow with thousands of services (O(n) rule traversal). Cilium uses eBPF hash maps for O(1) lookups.

Performance Difference

| Metric | kube-proxy (iptables) | Cilium eBPF |
|---|---|---|
| Service lookup | O(n) rules | O(1) hash map |
| Rule update (1000 services) | ~5 seconds | ~50ms |
| Connection tracking | conntrack table | eBPF map |
| Latency overhead | ~0.5ms per hop | ~0.1ms per hop |
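You can see the eBPF service tables that replace those iptables chains directly from any Cilium agent pod. A quick sketch (output varies per cluster and version):

```bash
# List the services Cilium is load-balancing with eBPF on this node
kubectl -n kube-system exec ds/cilium -- cilium service list

# Inspect the raw eBPF load-balancer maps (frontend -> backend entries)
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list
```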

Enable kube-proxy Replacement

If you installed Cilium with kubeProxyReplacement=true, kube-proxy is already replaced. Remove the kube-proxy DaemonSet:

bash
# Delete the kube-proxy DaemonSet
kubectl -n kube-system delete daemonset kube-proxy

# Or disable it without deleting it, via a non-matching nodeSelector
kubectl -n kube-system patch daemonset kube-proxy -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'

On managed clusters like EKS:

bash
# EKS: Install Cilium without kube-proxy from the start
eksctl create cluster --name my-cluster --without-nodegroup
# Then install Cilium with kubeProxyReplacement=true before adding nodes

Network Policies — Identity-Based Security

Cilium's network policies are far more powerful than standard Kubernetes NetworkPolicies. They operate on pod identity (labels), not IP addresses, and support L7 filtering.
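The identities behind those policies are visible from the agent. For example (output will differ in your cluster):

```bash
# Endpoints on this node with their numeric security identities
kubectl -n kube-system exec ds/cilium -- cilium endpoint list

# All known identities and the label sets they resolve to
kubectl -n kube-system exec ds/cilium -- cilium identity list
```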

Standard L3/L4 Policy

yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP

L7 HTTP Policy (No Sidecar Needed!)

This is where Cilium shines. You can filter by HTTP method, path, and headers — without deploying a service mesh:

yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/v1/products"
              - method: POST
                path: "/api/v1/orders"

This allows the frontend to only GET /api/v1/products and POST /api/v1/orders — any other request is denied. No Istio, no Envoy sidecar, no additional resource overhead.
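To sanity-check the policy, exec into a frontend pod and exercise both an allowed and a disallowed request (the `deploy/frontend` name here is an assumption about your cluster):

```bash
# Allowed: matches the GET /api/v1/products rule
kubectl -n production exec deploy/frontend -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://api-server:8080/api/v1/products

# Denied at L7: DELETE matches no rule, so Cilium's proxy rejects it with a 403
kubectl -n production exec deploy/frontend -- \
  curl -s -o /dev/null -w "%{http_code}\n" -X DELETE http://api-server:8080/api/v1/orders

# Watch the verdicts as they happen
hubble observe --namespace production --protocol http
```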

DNS-Based Policy

Allow pods to only reach specific external domains:

yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: external-access
spec:
  endpointSelector:
    matchLabels:
      app: payment-service
  egress:
    - toFQDNs:
        - matchName: "api.stripe.com"
        - matchName: "api.paypal.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
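To confirm the FQDN policy is working, check the DNS-to-IP mappings the agent has learned and watch the DNS flows (a sketch; adjust the namespace to wherever payment-service runs):

```bash
# FQDN -> IP cache built from observed DNS responses
kubectl -n kube-system exec ds/cilium -- cilium fqdn cache list

# Watch DNS queries and responses in the payment-service namespace
hubble observe --protocol dns --namespace default
```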

Hubble — Deep Observability Without Sidecars

Hubble is Cilium's observability platform. It gives you:

  • Flow logs — every packet between pods, with L7 metadata
  • Service dependency maps — visual graph of service communication
  • Metrics — Prometheus-compatible metrics for network flows
  • DNS visibility — every DNS query and response

Using Hubble CLI

bash
# Install Hubble CLI
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz | tar xz
sudo mv hubble /usr/local/bin/
 
# Port-forward Hubble Relay
cilium hubble port-forward &
 
# Observe all flows
hubble observe
 
# Filter by namespace
hubble observe --namespace production
 
# Filter by pod
hubble observe --to-pod production/api-server
 
# Show only dropped packets (policy violations)
hubble observe --verdict DROPPED
 
# Show HTTP flows with status codes
hubble observe --protocol http -o json | jq '.flow.l7.http'

Hubble UI

Access the visual service map:

bash
cilium hubble ui

This opens a browser with a real-time dependency graph showing:

  • Which services talk to each other
  • Request rates and latencies
  • Error rates per connection
  • Policy verdicts (allowed/denied)

Hubble Prometheus Metrics

yaml
# In Cilium Helm values
hubble:
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - icmp
      - http
    serviceMonitor:
      enabled: true

Key metrics:

  • hubble_flows_processed_total — total flows
  • hubble_drop_total{reason="POLICY_DENIED"} — policy drops
  • hubble_http_requests_total{method="GET",status="200"} — HTTP metrics per endpoint
  • hubble_dns_queries_total — DNS query volume
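These feed standard Prometheus alerting. As a sketch, an alert on policy drops might look like this (the threshold and duration are placeholders to adapt; available labels depend on how the drop metric's context options are configured):

yaml
groups:
  - name: cilium-network
    rules:
      - alert: CiliumPolicyDenials
        expr: sum(rate(hubble_drop_total{reason="POLICY_DENIED"}[5m])) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Cilium is dropping traffic due to network policy denials"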

Transparent Encryption with WireGuard

Cilium can encrypt all pod-to-pod traffic using WireGuard — no application changes, no certificates to manage:

yaml
# In Cilium Helm values
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: true  # Also encrypts node-to-node traffic

That's it. All inter-pod traffic is now encrypted at the kernel level with WireGuard. No mTLS certificates, no sidecar proxies, and only modest overhead (WireGuard adds ~5% vs ~15% for Istio mTLS).
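To confirm encryption is actually on, ask the agent (exact output format varies by version):

```bash
# The status output should report Wireguard as the encryption mode
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i encryption

# Per-peer WireGuard details (public keys, allowed IPs, last handshake)
kubectl -n kube-system exec ds/cilium -- cilium encrypt status
```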

Cilium vs Traditional CNIs

| Feature | Calico | Flannel | Cilium |
|---|---|---|---|
| eBPF-native | Partial | No | Yes |
| kube-proxy replacement | Yes (eBPF mode) | No | Yes |
| L7 policies (no sidecar) | No | No | Yes |
| Built-in observability | No | No | Hubble |
| Transparent encryption | IPsec / WireGuard | No | WireGuard |
| DNS-based policies | No | No | Yes |
| Bandwidth management | No | No | Yes (EDT) |
| Multi-cluster (ClusterMesh) | Yes | No | Yes |

Production Checklist

Before going to production with Cilium:

bash
# 1. Verify kernel version (5.10+ recommended)
uname -r
 
# 2. Run connectivity tests
cilium connectivity test
 
# 3. Enable Hubble metrics for monitoring
# (see Helm values above)
 
# 4. Set up default-deny network policy
cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: default-deny
spec:
  endpointSelector: {}
  ingress:
    - fromEntities:
        - cluster
  egress:
    - toEntities:
        - cluster
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
EOF
 
# 5. Enable WireGuard encryption
# (see Helm values above)
 
# 6. Monitor Cilium agent health
kubectl -n kube-system exec ds/cilium -- cilium status --verbose

Learn More

Cilium's power comes from eBPF, and understanding eBPF fundamentals will make you a stronger Kubernetes networking engineer. For hands-on practice with Kubernetes networking and Cilium, check out KodeKloud's Kubernetes courses — they offer real lab environments where you can experiment safely.

If you need a Kubernetes cluster to practice on, DigitalOcean's managed Kubernetes is one of the most affordable options and supports custom CNI installations including Cilium.


Cilium isn't just a CNI — it's a platform. The earlier you adopt it, the more you'll benefit from its networking, security, and observability features.
