
Kubernetes vCluster Complete Guide: Virtual Clusters for Multi-Tenancy in 2026

Master vCluster — create lightweight virtual Kubernetes clusters inside your existing cluster. Covers setup, use cases, CI/CD ephemeral environments, and production patterns.

DevOpsBoys · Mar 19, 2026 · 6 min read

You have one Kubernetes cluster. You need to give 10 teams their own isolated environments. Creating 10 real clusters is expensive and painful to manage. Namespaces don't provide enough isolation. What do you do?

vCluster. It creates fully functional Kubernetes clusters inside your existing cluster — each team gets their own API server, their own resources, their own RBAC — all running as pods on the host cluster.

This guide covers everything you need to set up and run vClusters in production.

What Is vCluster?

vCluster (by Loft Labs) creates virtual Kubernetes clusters that run inside namespaces of a host cluster. Each vCluster has:

  • Its own API server — a separate control plane
  • Its own etcd/SQLite — independent state storage
  • Its own resources — CRDs, RBAC, namespaces
  • Synced workloads — pods actually run on the host cluster's nodes

Think of it as a Kubernetes cluster that's also a Kubernetes pod.

┌─────────────────────── Host Cluster ───────────────────────┐
│                                                             │
│  ┌─── vcluster-team-a ───┐  ┌─── vcluster-team-b ───┐    │
│  │  API Server            │  │  API Server            │    │
│  │  etcd (SQLite)         │  │  etcd (SQLite)         │    │
│  │  Syncer                │  │  Syncer                │    │
│  │                        │  │                        │    │
│  │  team-a sees:          │  │  team-b sees:          │    │
│  │  - their namespaces    │  │  - their namespaces    │    │
│  │  - their pods          │  │  - their pods          │    │
│  │  - their CRDs          │  │  - their CRDs          │    │
│  └────────────────────────┘  └────────────────────────┘    │
│                                                             │
│  Host admin sees: all vClusters as namespaces              │
└─────────────────────────────────────────────────────────────┘

Why vCluster Over Alternatives

| Approach          | Isolation | Cost      | Complexity |
| ----------------- | --------- | --------- | ---------- |
| Separate clusters | Full      | Very High | High       |
| Namespaces        | Weak      | Low       | Low        |
| vCluster          | Strong    | Low       | Medium     |

  • vs. Real clusters: vClusters share the host cluster's nodes and infrastructure — no extra VMs, no extra networking, 90% lower cost
  • vs. Namespaces: vClusters provide full API server isolation — teams can install their own CRDs, define their own RBAC, and don't see each other's resources
  • vs. Cluster API: vClusters create in seconds, not minutes — no infrastructure provisioning needed

Installing vCluster CLI

bash
# macOS
brew install loft-sh/tap/vcluster
 
# Linux
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/
 
# Verify
vcluster --version

Creating Your First vCluster

bash
vcluster create my-vcluster --namespace team-a

That's it. In about 30 seconds you have a fully functional Kubernetes cluster, and the CLI automatically switches your kubeconfig context to it:

bash
# You're now inside the vCluster
kubectl get namespaces
# NAME              STATUS   AGE
# default           Active   30s
# kube-system       Active   30s
# kube-public       Active   30s
 
kubectl get nodes
# NAME                          STATUS   ROLES    AGE   VERSION
# my-vcluster-0                 Ready    <none>   30s   v1.30.2

The vCluster sees its own clean Kubernetes environment — fresh namespaces, its own nodes (synced from host), no other teams' resources.
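To get back to the host cluster, disconnect. From the host's perspective, the entire vCluster is just workloads in the `team-a` namespace. A sketch, guarded so it's a no-op on machines without the vcluster CLI:

```bash
# Switch your kubeconfig context back to the host cluster, then look at
# the vCluster from the outside: its control plane is simply pods.
if command -v vcluster >/dev/null 2>&1; then
  vcluster disconnect                   # restore the pre-create context
  kubectl get pods --namespace team-a   # API server + syncer pods live here
else
  echo "vcluster CLI not installed; nothing to disconnect"
fi
```

This dual view is the key mental model: inside, a full cluster; outside, a handful of pods the host admin can inspect, quota, and delete like any other workload.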

Customizing vClusters with values.yaml

yaml
# vcluster-values.yaml
vcluster:
  image: rancher/k3s:v1.30.2-k3s1
 
 
sync:
  ingresses:
    enabled: true
  persistentvolumeclaims:
    enabled: true
  storageclasses:
    enabled: true
  nodes:
    enabled: true
    syncAllNodes: true
 
resources:
  limits:
    cpu: "1"
    memory: "1Gi"
  requests:
    cpu: "200m"
    memory: "256Mi"
 
isolation:
  enabled: true
  resourceQuota:
    enabled: true
    quota:
      requests.cpu: "4"
      requests.memory: "8Gi"
      limits.cpu: "8"
      limits.memory: "16Gi"
      pods: "50"
      services: "20"
      persistentvolumeclaims: "10"
  limitRange:
    enabled: true
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
  networkPolicy:
    enabled: true

Create with custom values:

bash
vcluster create team-a-cluster \
  --namespace team-a \
  --values vcluster-values.yaml

This creates a vCluster with:

  • Resource quotas (max 8 CPU, 16Gi memory, 50 pods)
  • Default limit ranges for all containers
  • Network policies isolating the vCluster
  • Syncing for ingresses, PVCs, and storage classes

Use Case 1: CI/CD Ephemeral Environments

Create a fresh Kubernetes cluster for every pull request:

yaml
# .github/workflows/pr-test.yml
name: PR Integration Tests
on:
  pull_request:
    branches: [main]
 
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
 
    - name: Set up kubectl
      uses: azure/setup-kubectl@v4
 
    - name: Create ephemeral vCluster
      run: |
        curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
        chmod +x vcluster
        sudo mv vcluster /usr/local/bin/
        vcluster create pr-${{ github.event.number }} \
          --namespace ci-environments \
          --connect=false \
          --values ci-vcluster-values.yaml
 
    - name: Connect and deploy
      run: |
        vcluster connect pr-${{ github.event.number }} \
          --namespace ci-environments \
          --update-current=true
        kubectl apply -f manifests/
        kubectl wait --for=condition=ready pod -l app=my-app --timeout=120s
 
    - name: Run integration tests
      run: |
        APP_URL=$(kubectl get svc my-app -o jsonpath='{.spec.clusterIP}')
        npm run test:integration -- --url=http://$APP_URL:8080
 
    - name: Cleanup
      if: always()
      run: |
        vcluster delete pr-${{ github.event.number }} --namespace ci-environments

Every PR gets a real Kubernetes cluster, tests run in isolation, and the cluster is destroyed when done. No shared state, no flaky tests from leftover resources.
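The workflow above references a ci-vcluster-values.yaml that keeps PR clusters small and short-lived. A minimal sketch of what that file might contain — the specific limits are illustrative, not required:

```yaml
# ci-vcluster-values.yaml — illustrative values for throwaway PR clusters
resources:
  limits:
    cpu: "500m"
    memory: "512Mi"

isolation:
  enabled: true
  resourceQuota:
    enabled: true
    quota:
      requests.cpu: "2"
      requests.memory: "4Gi"
      pods: "20"
```

Tight quotas matter more in CI than anywhere else: a runaway test in one PR cluster shouldn't be able to starve the others.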

Use Case 2: Multi-Team Development

Give each team their own cluster with resource limits:

bash
# Team A — frontend, needs ingress
vcluster create team-frontend \
  --namespace team-frontend \
  --values frontend-values.yaml
 
# Team B — backend, needs databases
vcluster create team-backend \
  --namespace team-backend \
  --values backend-values.yaml
 
# Team C — data team, needs GPU access
vcluster create team-data \
  --namespace team-data \
  --values data-values.yaml

Each team gets:

  • Their own kubeconfig (distribute via Tailscale, Vault, or a platform portal)
  • Their own RBAC — they're cluster-admin inside their vCluster
  • Their own CRDs — install Helm charts, operators, whatever they need
  • Resource limits enforced by the host cluster
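Handing out those kubeconfigs can be scripted. A sketch, assuming the CLI's `--print` flag (which writes the kubeconfig to stdout instead of switching your context) and a one-namespace-per-team layout; the file-naming convention is ours, not a vCluster requirement:

```bash
# Export a standalone kubeconfig file per team, guarded so the loop
# degrades to a dry run on machines without the vcluster CLI.
for team in team-frontend team-backend team-data; do
  if command -v vcluster >/dev/null 2>&1; then
    vcluster connect "$team" --namespace "$team" --print > "$team.kubeconfig"
  else
    echo "would write $team.kubeconfig"
  fi
done
```

Each exported file grants access to exactly one vCluster, so leaking it exposes one team's sandbox rather than the host cluster.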

Use Case 3: Testing Kubernetes Upgrades

Test a K8s version upgrade without touching production:

bash
# Create a vCluster running the new K8s version
vcluster create upgrade-test \
  --namespace upgrade-testing \
  --kubernetes-version=v1.31.0
 
# Deploy your production manifests
vcluster connect upgrade-test --namespace upgrade-testing
kubectl apply -f production-manifests/
 
# Run your test suite
./run-compatibility-tests.sh
 
# Clean up
vcluster delete upgrade-test --namespace upgrade-testing

This takes minutes instead of hours. No new VMs, no new networking, no risk to production.

Managing vClusters at Scale

List All vClusters

bash
vcluster list
 
# NAME              NAMESPACE        STATUS    AGE
# team-frontend     team-frontend    Running   5d
# team-backend      team-backend     Running   3d
# pr-142            ci-environments  Running   2h

Pause Inactive vClusters

Save resources by pausing vClusters that aren't being used:

bash
# Pause — stops the vCluster but preserves state
vcluster pause team-frontend --namespace team-frontend
 
# Resume — starts it back up with all state intact
vcluster resume team-frontend --namespace team-frontend

Auto-Sleep with Loft

Loft (the commercial product from the vCluster team) adds automatic sleep — vClusters that haven't received API requests for X minutes automatically pause and resume on next access.

Resource Overhead

Each vCluster runs as a pod on the host cluster. Typical resource usage:

| Component            | CPU        | Memory      |
| -------------------- | ---------- | ----------- |
| k3s API server       | 100-200m   | 128-256Mi   |
| Syncer               | 50-100m    | 64-128Mi    |
| SQLite (default)     | Minimal    | 32-64Mi     |
| Total per vCluster   | ~200-400m  | ~256-512Mi  |

For a host cluster with 100 CPU cores and 400Gi memory, you can run 50+ vClusters comfortably while leaving plenty of resources for actual workloads.

Compare that to 50 separate clusters, each needing 3 control plane nodes at $100-300/month each. vCluster saves $15,000-45,000/month in that scenario.
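The overhead claim is easy to sanity-check. A back-of-the-envelope calculation for the 50-vCluster scenario, using the worst-case figures from the table above:

```bash
# Worst-case control-plane overhead for 50 vClusters
vclusters=50
cpu_m_each=400    # per-vCluster CPU, in millicores
mem_mi_each=512   # per-vCluster memory, in Mi

echo "CPU:    $(( vclusters * cpu_m_each / 1000 )) of 100 cores"
echo "Memory: $(( vclusters * mem_mi_each / 1024 )) of 400 Gi"
```

Even at the worst case, 50 control planes consume 20 cores and 25Gi — a fifth of the CPU and about a sixteenth of the memory of that host, leaving the rest for real workloads.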

Security Best Practices

1. Enable Isolation Mode

yaml
isolation:
  enabled: true

This automatically creates:

  • ResourceQuota limiting what the vCluster can consume
  • LimitRange setting default resource limits
  • NetworkPolicy restricting pod-to-pod communication across vClusters
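For intuition, the ResourceQuota that isolation mode materializes in the host namespace looks roughly like this — the object name and values here are illustrative; the real ones come from your isolation.resourceQuota settings:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vcluster-quota     # illustrative name
  namespace: team-a        # the HOST namespace the vCluster runs in
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "50"
```

Because these objects live in the host namespace, the host cluster enforces them — nothing a tenant does inside the vCluster can raise its own limits.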

2. Restrict What Gets Synced

Only sync resources the vCluster actually needs:

yaml
sync:
  ingresses:
    enabled: true
  persistentvolumeclaims:
    enabled: true
  # Don't sync these
  nodes:
    enabled: false  # Don't expose host node info
  storageclasses:
    enabled: false

3. Use Separate Namespaces

Each vCluster should run in its own host namespace with RBAC restricting cross-namespace access.
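On the host side, that usually means namespace-scoped RoleBindings rather than cluster-wide grants. A sketch — the group name is an assumption; substitute whatever your identity provider emits:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-host-access   # illustrative name
  namespace: team-a          # the host namespace for this vCluster
subjects:
- kind: Group
  name: platform-team-a      # assumed group from your IdP
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in role, scoped here by the binding
  apiGroup: rbac.authorization.k8s.io
```

Binding the built-in `edit` ClusterRole inside a single namespace gives a team's operators access to their vCluster's host namespace and nothing else.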

Wrapping Up

vCluster solves the multi-tenancy problem in Kubernetes without the cost of separate clusters or the weakness of namespace isolation. Key takeaways:

  1. Each vCluster is a full Kubernetes cluster running as pods
  2. Create in seconds, destroy in seconds — perfect for CI/CD
  3. Teams get cluster-admin without affecting other teams
  4. 90% cheaper than separate clusters
  5. Works with existing Kubernetes tooling (Helm, ArgoCD, kubectl)

Start with one vCluster for your CI/CD pipeline. Once you see how fast and cheap it is, you'll want them everywhere.

Want to master Kubernetes multi-tenancy and cluster management? KodeKloud's advanced Kubernetes courses cover RBAC, resource management, and production patterns. For a managed Kubernetes cluster to test vCluster on, DigitalOcean Kubernetes is affordable and supports vCluster out of the box.
