
Kaniko vs BuildKit vs Docker — Container Image Build Tools Compared (2026)

Building Docker images in Kubernetes CI/CD? Kaniko, BuildKit, and Docker-in-Docker all do it differently. Here's which one to use and why.

DevOpsBoys · Apr 30, 2026 · 4 min read

Building container images inside Kubernetes CI/CD pipelines is tricky because the standard Docker daemon requires privileged mode — a security nightmare in shared clusters. Here's how Kaniko, BuildKit, and Docker-in-Docker compare.


The Problem: Docker-in-Docker (DinD)

The naive approach is to run Docker inside your CI pod. It requires privileged: true, which gives the container full access to the host.

yaml
# Don't do this in production
containers:
- name: docker
  image: docker:24-dind
  securityContext:
    privileged: true   # full host access = security nightmare

In shared clusters, one malicious or compromised build job can escape to the host and access other tenants' data. Don't use DinD in shared clusters.


Kaniko

Kaniko builds Docker images entirely in userspace, without needing a Docker daemon or privileged access.

How it works: Kaniko executes each Dockerfile instruction, snapshots the filesystem after each command, and turns the diffs into image layers, all inside a regular (non-privileged) container.

Basic Kaniko job in Kubernetes:

yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: build-myapp
spec:
  template:
    spec:
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args:
        - "--dockerfile=Dockerfile"
        - "--context=git://github.com/myorg/myapp"
        - "--destination=myregistry.com/myapp:latest"
        - "--cache=true"
        - "--cache-repo=myregistry.com/myapp/cache"
        env:
        - name: DOCKER_CONFIG
          value: /kaniko/.docker
        volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
      volumes:
      - name: docker-config
        secret:
          secretName: registry-credentials
      restartPolicy: Never
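The Job above mounts a Secret named registry-credentials as kaniko's Docker config. One way to create it from an existing docker login, sketched here (the key name config.json matters, since DOCKER_CONFIG points at /kaniko/.docker):

```shell
# Sketch: package existing registry credentials as the Secret the Job
# mounts. The key must be config.json so kaniko finds it at
# /kaniko/.docker/config.json.
kubectl create secret generic registry-credentials \
  --from-file=config.json=$HOME/.docker/config.json
```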

In GitHub Actions (Kubernetes runner):

yaml
- name: Build with Kaniko
  uses: aevea/action-kaniko@master
  with:
    image: myregistry.com/myapp
    tag: ${{ github.sha }}
    registry: myregistry.com
    username: ${{ secrets.REGISTRY_USER }}
    password: ${{ secrets.REGISTRY_PASSWORD }}

Pros:

  • No privileged access required
  • Works in any standard Kubernetes pod
  • Native layer caching
  • Widely supported in enterprise environments

Cons:

  • Slower than BuildKit, especially on cache misses
  • No support for --mount=type=cache and other BuildKit-only Dockerfile features
  • Every build re-downloads base images (mitigated with a cache registry)
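The base-image cost can also be softened with kaniko's companion warmer image, which pre-pulls images into a shared cache directory that the executor reads via --cache-dir. A sketch (the image name and volume are placeholders):

```yaml
# Sketch: pre-pull base images with kaniko's warmer into a shared
# volume; the executor container then adds --cache-dir=/cache and
# mounts the same volume. Image names here are placeholders.
containers:
- name: warmer
  image: gcr.io/kaniko-project/warmer:latest
  args:
  - --cache-dir=/cache
  - --image=node:20-alpine   # base image(s) to pre-pull
  volumeMounts:
  - name: kaniko-cache
    mountPath: /cache
```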

BuildKit (Buildkitd)

BuildKit is the modern Docker build engine — it's what runs docker buildx build. You can also run it as a standalone daemon (buildkitd) in Kubernetes.

Deploy buildkitd as a DaemonSet:

yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: buildkitd
  namespace: ci
spec:
  selector:
    matchLabels:
      app: buildkitd
  template:
    metadata:
      labels:
        app: buildkitd
    spec:
      containers:
      - name: buildkitd
        image: moby/buildkit:latest
        args:
        - --addr
        - unix:///run/buildkit/buildkitd.sock
        - --addr
        - tcp://0.0.0.0:1234
        securityContext:
          privileged: true  # still needs privileged on the daemon
        volumeMounts:
        - name: buildkitd-socket
          mountPath: /run/buildkit
      volumes:
      - name: buildkitd-socket
        hostPath:
          path: /run/buildkit
          type: DirectoryOrCreate

Use from CI pod (no privileged required):

yaml
- name: build
  image: moby/buildkit:latest
  command:
  - buildctl
  args:
  - --addr=tcp://buildkitd.ci.svc.cluster.local:1234
  - build
  - --frontend=dockerfile.v0
  - --local
  - context=.
  - --local
  - dockerfile=.
  - --output
  - type=image,name=myregistry.com/myapp:latest,push=true

With rootless BuildKit (no privileged anywhere):

yaml
# Use the rootless variant
image: moby/buildkit:latest-rootless
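Rootless mode needs a bit more than swapping the image. A sketch of the usual extras, based on the upstream rootless examples (exact annotations depend on your cluster's container runtime):

```yaml
# Sketch: pod template changes rootless buildkitd typically needs
# (verify against your cluster runtime and the BuildKit docs)
template:
  metadata:
    annotations:
      container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
  spec:
    containers:
    - name: buildkitd
      image: moby/buildkit:latest-rootless
      args:
      - --addr
      - tcp://0.0.0.0:1234
      - --oci-worker-no-process-sandbox   # required without privileged
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        seccompProfile:
          type: Unconfined
```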

Pros:

  • Fastest build speeds
  • Full Dockerfile feature support (cache mounts, SSH mounts)
  • Parallel build stages
  • Best cache performance

Cons:

  • More complex setup
  • DaemonSet still needs privileged (unless rootless mode)
  • More moving parts in the cluster

Docker Buildx with Remote Builder (Easiest Option)

Run a BuildKit builder outside the cluster and connect to it via TCP:

bash
# Create a remote builder on a dedicated build VM
docker buildx create \
  --name remote-builder \
  --driver remote \
  tcp://build-vm.internal:1234
 
# Use from CI
docker buildx build \
  --builder remote-builder \
  --push \
  -t myregistry.com/myapp:latest .
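On the build VM side, the listener can be a single container. A minimal sketch (plain TCP for brevity; in practice you would enable BuildKit's TLS options before exposing the port):

```shell
# Sketch: run buildkitd on the build VM, listening on TCP 1234.
# Plain TCP shown for brevity -- use BuildKit's TLS flags in production.
docker run -d --name buildkitd --privileged \
  -p 1234:1234 \
  moby/buildkit:latest \
  --addr tcp://0.0.0.0:1234
```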

Feature Comparison

| Feature                | Kaniko | BuildKit                    | Docker DinD |
|------------------------|--------|-----------------------------|-------------|
| Privileged required    | ❌     | ❌ (rootless) / ✅ (daemon) | ✅          |
| Build speed            | Medium | Fast                        | Fast        |
| Cache mounts           | ❌     | ✅                          | ✅          |
| Multi-platform builds  | ❌     | ✅                          | ✅          |
| Setup complexity       | Low    | Medium                      | Low         |
| Security               | High   | High (rootless)             | Low         |
| Kubernetes-native      | ✅     | ✅                          | ⚠️          |

Which One to Use

Kaniko — best for most teams. No privileged access, simple job-based setup, works with any registry. Minor performance penalty is acceptable for security.

BuildKit (rootless) — best if you need cache mounts or multi-platform builds and can accept the setup complexity. Use rootless mode for production.

Docker DinD — only acceptable for single-tenant or self-hosted runners where you control the host. Never in shared multi-tenant clusters.


Quick Start: Kaniko + ECR in GitHub Actions

yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789:role/github-actions
    aws-region: us-east-1
 
- name: Login to ECR
  run: |
    aws ecr get-login-password | \
    docker login --username AWS --password-stdin \
    123456789.dkr.ecr.us-east-1.amazonaws.com
 
- name: Build and push with Kaniko
  run: |
    docker run \
      -v $(pwd):/workspace \
      -v $HOME/.docker/config.json:/kaniko/.docker/config.json:ro \
      gcr.io/kaniko-project/executor:latest \
      --dockerfile /workspace/Dockerfile \
      --context /workspace \
      --destination 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:${{ github.sha }} \
      --cache \
      --cache-repo 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp/cache