Kubernetes vCluster Complete Guide: Virtual Clusters for Multi-Tenancy in 2026
Master vCluster — create lightweight virtual Kubernetes clusters inside your existing cluster. Covers setup, use cases, CI/CD ephemeral environments, and production patterns.
You have one Kubernetes cluster. You need to give 10 teams their own isolated environments. Creating 10 real clusters is expensive and painful to manage. Namespaces don't provide enough isolation. What do you do?
vCluster. It creates fully functional Kubernetes clusters inside your existing cluster — each team gets their own API server, their own resources, their own RBAC — all running as pods on the host cluster.
This guide covers everything you need to set up and run vClusters in production.
What Is vCluster?
vCluster (by Loft Labs) creates virtual Kubernetes clusters that run inside namespaces of a host cluster. Each vCluster has:
- Its own API server — a separate control plane
- Its own etcd/SQLite — independent state storage
- Its own resources — CRDs, RBAC, namespaces
- Synced workloads — pods actually run on the host cluster's nodes
Think of it as a Kubernetes cluster that's also a Kubernetes pod.
┌─────────────────────── Host Cluster ───────────────────────┐
│ │
│ ┌─── vcluster-team-a ───┐ ┌─── vcluster-team-b ───┐ │
│ │ API Server │ │ API Server │ │
│ │ etcd (SQLite) │ │ etcd (SQLite) │ │
│ │ Syncer │ │ Syncer │ │
│ │ │ │ │ │
│ │ team-a sees: │ │ team-b sees: │ │
│ │ - their namespaces │ │ - their namespaces │ │
│ │ - their pods │ │ - their pods │ │
│ │ - their CRDs │ │ - their CRDs │ │
│ └────────────────────────┘ └────────────────────────┘ │
│ │
│ Host admin sees: all vClusters as namespaces │
└─────────────────────────────────────────────────────────────┘
Why vCluster Over Alternatives
| Approach | Isolation | Cost | Complexity |
|---|---|---|---|
| Separate clusters | Full | Very High | High |
| Namespaces | Weak | Low | Low |
| vCluster | Strong | Low | Medium |
- vs. Real clusters: vClusters share the host cluster's nodes and infrastructure — no extra VMs, no extra networking, 90% lower cost
- vs. Namespaces: vClusters provide full API server isolation — teams can install their own CRDs, define their own RBAC, and don't see each other's resources
- vs. Cluster API: vClusters create in seconds, not minutes — no infrastructure provisioning needed
Installing vCluster CLI
# macOS
brew install loft-sh/tap/vcluster
# Linux
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/
# Verify
vcluster --version
Creating Your First vCluster
vcluster create my-vcluster --namespace team-a
That's it. In about 30 seconds you have a fully functional Kubernetes cluster. The CLI automatically switches your kubeconfig:
# You're now inside the vCluster
kubectl get namespaces
# NAME STATUS AGE
# default Active 30s
# kube-system Active 30s
# kube-public Active 30s
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# my-vcluster-0   Ready    <none>   30s   v1.30.2
The vCluster sees its own clean Kubernetes environment — fresh namespaces, its own nodes (synced from the host), no other teams' resources.
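Under the hood, pods created inside the vCluster are synced to the host namespace under rewritten names, so a host admin sees them as ordinary pods in team-a. A minimal sketch of the renaming scheme (the pod-x-namespace-x-vcluster format is an assumption based on recent vCluster releases; verify it against your version):

```shell
# Sketch: how the syncer typically rewrites a virtual pod's name on the host.
# Assumed format: <pod>-x-<virtual-namespace>-x-<vcluster-name>; verify per version.
host_pod_name() {
  pod="$1"; vnamespace="$2"; vcluster="$3"
  printf '%s-x-%s-x-%s\n' "$pod" "$vnamespace" "$vcluster"
}

host_pod_name web-7d4b9 default my-vcluster
# -> web-7d4b9-x-default-x-my-vcluster
```

Running kubectl get pods -n team-a on the host shows both the vCluster's control-plane pod and these synced workload pods.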
Customizing vClusters with values.yaml
# vcluster-values.yaml
vcluster:
  image: rancher/k3s:v1.30.2-k3s1
syncer:
  extraArgs:
    - --sync=ingresses
    - --sync=persistentvolumeclaims
    - --sync=storageclasses
sync:
  ingresses:
    enabled: true
  persistentvolumeclaims:
    enabled: true
  storageclasses:
    enabled: true
  nodes:
    enabled: true
    syncAllNodes: true
resources:
  limits:
    cpu: "1"
    memory: "1Gi"
  requests:
    cpu: "200m"
    memory: "256Mi"
isolation:
  enabled: true
  resourceQuota:
    enabled: true
    quota:
      requests.cpu: "4"
      requests.memory: "8Gi"
      limits.cpu: "8"
      limits.memory: "16Gi"
      pods: "50"
      services: "20"
      persistentvolumeclaims: "10"
  limitRange:
    enabled: true
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
  networkPolicy:
    enabled: true
Create with custom values:
vcluster create team-a-cluster \
  --namespace team-a \
  --values vcluster-values.yaml
This creates a vCluster with:
- Resource quotas (max 8 CPU, 16Gi memory, 50 pods)
- Default limit ranges for all containers
- Network policies isolating the vCluster
- Syncing for ingresses, PVCs, and storage classes
Use Case 1: CI/CD Ephemeral Environments
Create a fresh Kubernetes cluster for every pull request:
# .github/workflows/pr-test.yml
name: PR Integration Tests

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v4

      - name: Create ephemeral vCluster
        run: |
          curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
          chmod +x vcluster
          sudo mv vcluster /usr/local/bin/
          vcluster create pr-${{ github.event.number }} \
            --namespace ci-environments \
            --connect=false \
            --values ci-vcluster-values.yaml

      - name: Connect and deploy
        run: |
          vcluster connect pr-${{ github.event.number }} \
            --namespace ci-environments \
            --update-current=true
          kubectl apply -f manifests/
          kubectl wait --for=condition=ready pod -l app=my-app --timeout=120s

      - name: Run integration tests
        run: |
          APP_URL=$(kubectl get svc my-app -o jsonpath='{.spec.clusterIP}')
          npm run test:integration -- --url=http://$APP_URL:8080

      - name: Cleanup
        if: always()
        run: |
          vcluster delete pr-${{ github.event.number }} --namespace ci-environments
Every PR gets a real Kubernetes cluster, tests run in isolation, and the cluster is destroyed when done. No shared state, no flaky tests from leftover resources.
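The if: always() cleanup step is the important part: without it, a failed test run leaks a vCluster that keeps consuming host resources. Outside GitHub Actions, the same guarantee can be sketched as a small wrapper (a hypothetical helper; VCLUSTER is overridable so the script can be exercised against a stub):

```shell
# Sketch: run a command inside an ephemeral vCluster and always delete it,
# even when the command fails. VCLUSTER defaults to the real CLI.
run_in_ephemeral() {
  name="$1"; shift
  vc="${VCLUSTER:-vcluster}"
  "$vc" create "$name" --connect=false || return 1
  "$@"                      # e.g. kubectl apply + integration tests
  status=$?
  "$vc" delete "$name"      # cleanup runs on success and failure alike
  return "$status"
}

# Usage: run_in_ephemeral pr-142 ./run-tests.sh
```

The wrapper preserves the test command's exit code, so CI still fails on red tests even though cleanup succeeded.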
Use Case 2: Multi-Team Development
Give each team their own cluster with resource limits:
# Team A — frontend, needs ingress
vcluster create team-frontend \
  --namespace team-frontend \
  --values frontend-values.yaml

# Team B — backend, needs databases
vcluster create team-backend \
  --namespace team-backend \
  --values backend-values.yaml

# Team C — data team, needs GPU access
vcluster create team-data \
  --namespace team-data \
  --values data-values.yaml
Each team gets:
- Their own kubeconfig (distribute via Tailscale, Vault, or platform portal)
- Their own RBAC — they're cluster-admin inside their vCluster
- Their own CRDs — install Helm charts, operators, whatever they need
- Resource limits enforced by the host cluster
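One way to hand out those kubeconfigs is to export a standalone file per team. The sketch below assumes vcluster connect --print, which writes the kubeconfig to stdout instead of switching your current context (check vcluster connect --help on your version; the helper name is hypothetical):

```shell
# Sketch: export a standalone kubeconfig file for each team's vCluster.
# VCLUSTER defaults to the real CLI and is overridable for dry runs.
export_team_kubeconfigs() {
  outdir="$1"; shift
  vc="${VCLUSTER:-vcluster}"
  mkdir -p "$outdir"
  for team in "$@"; do
    "$vc" connect "team-$team" --namespace "team-$team" --print \
      > "$outdir/team-$team.kubeconfig"
  done
}

# Usage: export_team_kubeconfigs ./kubeconfigs frontend backend data
```

The resulting files can then be distributed through Vault, Tailscale, or your platform portal as mentioned above.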
Use Case 3: Testing Kubernetes Upgrades
Test a K8s version upgrade without touching production:
# Create a vCluster running the new K8s version
vcluster create upgrade-test \
  --namespace upgrade-testing \
  --kubernetes-version=v1.31.0
# Deploy your production manifests
vcluster connect upgrade-test --namespace upgrade-testing
kubectl apply -f production-manifests/
# Run your test suite
./run-compatibility-tests.sh
# Clean up
vcluster delete upgrade-test --namespace upgrade-testing
This takes minutes instead of hours. No new VMs, no new networking, no risk to production.
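To vet several candidate versions in one go, the same steps can be looped. One detail worth encoding: Kubernetes object names can't contain dots, so the version string needs sanitizing before it becomes a vCluster name. A sketch with hypothetical helper names (VCLUSTER is overridable for dry runs):

```shell
# Sketch: smoke-test manifests against multiple Kubernetes versions in sequence.
# Dots are invalid in Kubernetes resource names, so v1.31.0 becomes v1-31-0.
sanitize_version() {
  printf '%s\n' "$1" | tr '.' '-'
}

test_k8s_versions() {
  vc="${VCLUSTER:-vcluster}"
  for ver in "$@"; do
    name="upgrade-test-$(sanitize_version "$ver")"
    "$vc" create "$name" --kubernetes-version "$ver" --connect=false
    # ... connect, kubectl apply -f production-manifests/, run tests ...
    "$vc" delete "$name"
  done
}

# Usage: test_k8s_versions v1.30.2 v1.31.0
```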
Managing vClusters at Scale
List All vClusters
vcluster list
# NAME NAMESPACE STATUS AGE
# team-frontend team-frontend Running 5d
# team-backend team-backend Running 3d
# pr-142          ci-environments   Running   2h
Pause Inactive vClusters
Save resources by pausing vClusters that aren't being used:
# Pause — stops the vCluster but preserves state
vcluster pause team-frontend --namespace team-frontend
# Resume — starts it back up with all state intact
vcluster resume team-frontend --namespace team-frontend
Auto-Sleep with Loft
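Pausing also scripts well, for example as a nightly cron job that pauses every vCluster in a CI namespace. The sketch below parses the plain-text vcluster list output and assumes the name is the first column; if your CLI version supports JSON output, prefer that for robustness:

```shell
# Sketch: pause every vCluster in a host namespace (e.g. nightly via cron).
# Assumes the first column of `vcluster list` output is the vCluster name.
pause_all_vclusters() {
  ns="$1"
  vc="${VCLUSTER:-vcluster}"
  "$vc" list --namespace "$ns" | tail -n +2 | awk '{print $1}' |
  while read -r name; do
    [ -n "$name" ] && "$vc" pause "$name" --namespace "$ns"
  done
}

# Usage: pause_all_vclusters ci-environments
```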
Loft (the commercial product from the vCluster team) adds automatic sleep — vClusters that haven't received API requests for X minutes automatically pause and resume on next access.
Resource Overhead
Each vCluster runs as a pod on the host cluster. Typical resource usage:
| Component | CPU | Memory |
|---|---|---|
| k3s API server | 100-200m | 128-256Mi |
| Syncer | 50-100m | 64-128Mi |
| SQLite (default) | Minimal | 32-64Mi |
| Total per vCluster | ~200-400m | ~256-512Mi |
For a host cluster with 100 CPU cores and 400Gi memory, you can run 50+ vClusters comfortably while leaving plenty of resources for actual workloads.
Compare that to 50 separate clusters, each needing three control-plane nodes at $100-300 per node per month. vCluster saves $15,000-45,000/month in that scenario.
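That savings figure is straightforward arithmetic, which you can sanity-check:

```shell
# Back-of-envelope: 50 clusters x 3 control-plane nodes x $100-$300 per node per month.
echo "low end:  \$$((50 * 3 * 100)) per month"   # $15000
echo "high end: \$$((50 * 3 * 300)) per month"   # $45000
```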
Security Best Practices
1. Enable Isolation Mode
isolation:
  enabled: true
This automatically creates:
- ResourceQuota limiting what the vCluster can consume
- LimitRange setting default resource limits
- NetworkPolicy restricting pod-to-pod communication across vClusters
2. Restrict What Gets Synced
Only sync resources the vCluster actually needs:
sync:
  ingresses:
    enabled: true
  persistentvolumeclaims:
    enabled: true
  # Don't sync these
  nodes:
    enabled: false         # Don't expose host node info
  storageclasses:
    enabled: false
3. Use Separate Namespaces
Each vCluster should run in its own host namespace with RBAC restricting cross-namespace access.
Wrapping Up
vCluster solves the multi-tenancy problem in Kubernetes without the cost of separate clusters or the weakness of namespace isolation. Key takeaways:
- Each vCluster is a full Kubernetes cluster running as pods
- Create in seconds, destroy in seconds — perfect for CI/CD
- Teams get cluster-admin without affecting other teams
- 90% cheaper than separate clusters
- Works with existing Kubernetes tooling (Helm, ArgoCD, kubectl)
Start with one vCluster for your CI/CD pipeline. Once you see how fast and cheap it is, you'll want them everywhere.
Want to master Kubernetes multi-tenancy and cluster management? KodeKloud's advanced Kubernetes courses cover RBAC, resource management, and production patterns. For a managed Kubernetes cluster to test vCluster on, DigitalOcean Kubernetes is affordable and supports vCluster out of the box.