What is LimitRange and ResourceQuota in Kubernetes? Explained Simply
LimitRange and ResourceQuota in Kubernetes explained from scratch — what they do, how they differ, and how to set them up with real examples.
You've set up a Kubernetes cluster. One developer deploys a pod that requests 32GB of RAM and accidentally brings down half the cluster. Another team's pods run with no limits at all and starve other workloads.
Both problems are solved by two Kubernetes objects: LimitRange and ResourceQuota.
They're often confused with each other. This post explains both, clearly.
The Problem They Solve
Without limits, Kubernetes has no guardrails:
- A pod can request 0 CPU (and get whatever's free)
- A pod can request 100 cores (and starve everything else)
- A namespace can run 500 pods and exhaust the cluster
- Containers with no limits can OOMKill neighbors by consuming all available memory
LimitRange and ResourceQuota are the two tools that prevent this chaos.
LimitRange — Limits Per Pod/Container
A LimitRange sets default and maximum resource values for individual pods and containers within a namespace.
Think of it as: "Each container in this namespace can use at most X CPU and Y memory."
What LimitRange Controls
- Default CPU/memory requests (if a container doesn't set them)
- Default CPU/memory limits (if a container doesn't set them)
- Maximum CPU/memory a single container/pod can request
- Minimum CPU/memory a container must request
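All of these values are written as Kubernetes quantity strings: "500m" means 500 millicores, "256Mi" means 256 mebibytes. As a rough illustration of how those strings map to plain numbers, here is a small, hypothetical Python parser (a simplified sketch, not the real Kubernetes quantity grammar, which supports more suffixes):

```python
def parse_cpu(q: str) -> float:
    """CPU quantities: '500m' is 500 millicores; '2' is 2 whole cores."""
    if q.endswith("m"):
        return int(q[:-1]) / 1000
    return float(q)

def parse_memory(q: str) -> int:
    """Memory quantities: binary suffixes Ki/Mi/Gi are powers of 1024."""
    suffixes = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in suffixes.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # bare number: plain bytes

print(parse_cpu("500m"))      # 0.5 cores
print(parse_memory("256Mi"))  # 268435456 bytes
```

Keeping this mapping in mind helps when comparing a container's request against a LimitRange min or max.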
Creating a LimitRange
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: development
spec:
  limits:
  - type: Container
    default:          # Applied if container sets no limits
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:   # Applied if container sets no requests
      cpu: "100m"
      memory: "128Mi"
    max:              # Hard ceiling — no container can exceed this
      cpu: "2"
      memory: "1Gi"
    min:              # Floor — containers must request at least this
      cpu: "50m"
      memory: "64Mi"
```

Apply it:

```bash
kubectl apply -f limitrange.yaml
kubectl describe limitrange dev-limits -n development
```

What Happens After You Apply It
Any new pod created in the development namespace that doesn't set resource limits/requests gets the defaults automatically:
```yaml
# Developer writes this (no resources set):
containers:
- name: app
  image: nginx
```

```yaml
# Kubernetes applies this automatically:
containers:
- name: app
  image: nginx
  resources:
    requests:
      cpu: "100m"      # defaultRequest
      memory: "128Mi"
    limits:
      cpu: "500m"      # default
      memory: "256Mi"
```

If a developer tries to request more than the max:

```yaml
resources:
  requests:
    cpu: "4"   # Exceeds max of "2"
```

The pod is rejected:

```
Error from server (Forbidden): maximum cpu usage per Container is 2, but limit is 4.
```
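The defaulting and rejection steps above can be sketched in a few lines of Python (an illustration only; the real logic lives in the kube-apiserver's LimitRange admission plugin and covers memory as well). Values here are CPU millicores, mirroring the dev-limits example:

```python
# Mirrors the dev-limits LimitRange, expressed in millicores.
LIMIT_RANGE = {
    "defaultRequest": 100,  # cpu: "100m"
    "default": 500,         # cpu: "500m"
    "min": 50,              # cpu: "50m"
    "max": 2000,            # cpu: "2"
}

def admit_container(requested=None, limit=None):
    """Fill in defaults for missing values, then enforce min/max."""
    if limit is None:
        limit = LIMIT_RANGE["default"]
    if requested is None:
        requested = LIMIT_RANGE["defaultRequest"]
    if limit > LIMIT_RANGE["max"]:
        raise ValueError(f"maximum cpu per Container is {LIMIT_RANGE['max']}m, "
                         f"but limit is {limit}m")
    if requested < LIMIT_RANGE["min"]:
        raise ValueError(f"minimum cpu per Container is {LIMIT_RANGE['min']}m")
    return requested, limit

# A container with no resources set gets the defaults:
print(admit_container())  # (100, 500)
```

Note the ordering: defaults are injected first, then the min/max checks run against the final values.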
LimitRange for Pods and PersistentVolumeClaims
You can also limit at the Pod level (sum of all containers) and PVC size:
```yaml
spec:
  limits:
  - type: Pod
    max:
      cpu: "4"
      memory: "4Gi"
  - type: PersistentVolumeClaim
    max:
      storage: "50Gi"
    min:
      storage: "1Gi"
```

ResourceQuota — Limits Per Namespace
A ResourceQuota sets the total resources a namespace is allowed to consume — across all pods combined.
Think of it as: "The entire development namespace can use at most 10 CPU cores and 20GB RAM total."
What ResourceQuota Controls
- Total CPU and memory across all pods in the namespace
- Maximum number of pods, services, deployments, etc.
- Total PVC storage requested
Creating a ResourceQuota
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    # Compute resources
    requests.cpu: "4"          # Total CPU requests across all pods
    requests.memory: "8Gi"     # Total memory requests
    limits.cpu: "8"            # Total CPU limits
    limits.memory: "16Gi"      # Total memory limits
    # Object counts
    pods: "20"                 # Max 20 pods in this namespace
    services: "10"
    deployments.apps: "10"
    configmaps: "20"
    secrets: "20"
    persistentvolumeclaims: "10"
    # Storage
    requests.storage: "100Gi"  # Total PVC storage
```

Apply it:

```bash
kubectl apply -f resourcequota.yaml
kubectl describe resourcequota dev-quota -n development
```

Output:

```
Name:            dev-quota
Namespace:       development
Resource         Used   Hard
--------         ----   ----
limits.cpu       1500m  8
limits.memory    1Gi    16Gi
pods             3      20
requests.cpu     300m   4
requests.memory  512Mi  8Gi
```
When the Quota is Exceeded
```
Error from server (Forbidden): exceeded quota: dev-quota,
requested: limits.cpu=2, used: limits.cpu=7, limited: limits.cpu=8
```
New pods are rejected until existing pods are removed or scaled down.
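The accounting behind that rejection can be sketched as follows (a simplified illustration, not real Kubernetes code): the quota tracks the sum of a resource across all pods in the namespace, and a new pod is admitted only if the new total stays within the hard cap.

```python
# Hard cap from the dev-quota example: limits.cpu: "8"
HARD_LIMITS_CPU = 8.0

def admit_pod(existing_pod_limits, new_pod_limit):
    """Admit a pod only if the namespace total stays within quota."""
    used = sum(existing_pod_limits)
    if used + new_pod_limit > HARD_LIMITS_CPU:
        raise ValueError(
            f"exceeded quota: requested: limits.cpu={new_pod_limit}, "
            f"used: limits.cpu={used}, limited: limits.cpu={HARD_LIMITS_CPU}"
        )

admit_pod([3.0, 2.0], 2.0)  # 5 cores used, 2 more fits under 8: admitted
try:
    admit_pod([3.0, 2.0, 2.0], 2.0)  # would total 9, over the cap: rejected
except ValueError as e:
    print(e)
```

This is also why a pod with no limits set is a problem for quota enforcement: there is nothing to add to the running total.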
LimitRange vs ResourceQuota — The Key Difference
|  | LimitRange | ResourceQuota |
|---|---|---|
| Scope | Per container/pod | Per namespace (total) |
| Controls | Max/min/default per resource unit | Total budget for namespace |
| Enforces | Individual pod behavior | Namespace-wide capacity |
| Use case | Prevent runaway pods | Enforce fair-share between teams |
Use both together:
- LimitRange ensures every container has sane defaults and can't go rogue
- ResourceQuota ensures no single namespace monopolizes the cluster
Practical Example — Multi-Team Setup
Imagine you have three teams sharing one cluster:
```bash
# Create namespaces per team
kubectl create namespace team-alpha
kubectl create namespace team-beta
kubectl create namespace team-gamma
```

Apply per-namespace quotas:

```yaml
# team-alpha gets more resources (larger team)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
  namespace: team-alpha
spec:
  hard:
    requests.cpu: "8"
    requests.memory: "16Gi"
    pods: "50"
---
# team-beta is a smaller team
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
  namespace: team-beta
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    pods: "20"
```

Each team has a resource budget. Team Beta can't accidentally consume resources allocated to Team Alpha.
Important: ResourceQuota + LimitRange Must Work Together
If a ResourceQuota constrains compute resources (such as requests.cpu or limits.memory), every pod in that namespace must explicitly set those requests and limits. Otherwise the pod is rejected:

```
Error: must specify limits.cpu since the ResourceQuota requires it
```
This is why you set a LimitRange alongside — it adds default values so pods without explicit limits still get created.
LimitRange → adds defaults to pods missing resource specs
ResourceQuota → enforces totals, requires all pods to have specs
Together → no pod slips through without resource accounting
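Putting the two steps together, the combined admission flow can be sketched like this (illustration only; the constants stand in for a LimitRange default and a ResourceQuota hard cap):

```python
DEFAULT_LIMIT_CPU = 0.5  # from a LimitRange `default`
QUOTA_LIMITS_CPU = 8.0   # from a ResourceQuota `limits.cpu`

def admit(pod_limit, namespace_used):
    """LimitRange defaulting runs first, then the quota total is checked."""
    if pod_limit is None:
        pod_limit = DEFAULT_LIMIT_CPU    # LimitRange injects the default
    if namespace_used + pod_limit > QUOTA_LIMITS_CPU:
        raise ValueError("exceeded quota")  # ResourceQuota rejects
    return pod_limit

# A pod submitted with no resources still gets counted against the quota:
print(admit(None, 7.0))  # 0.5
```

Because the default is injected before the quota check, even a bare pod spec ends up with a concrete number the quota can account for.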
Viewing Current Usage
```bash
# Check quota usage
kubectl describe resourcequota -n development

# Check all quotas in cluster
kubectl get resourcequota -A

# Check limit ranges
kubectl get limitrange -n development
kubectl describe limitrange dev-limits -n development
```

Common Mistakes
1. Setting ResourceQuota without LimitRange
Pods without resource specs are rejected at creation time. Always pair the two objects.
2. Setting max memory too low
If the default limit is 256Mi but your app needs 512Mi to start, all pods fail. Start with generous limits and tighten over time.
3. Forgetting about init containers
Init containers also count against LimitRange. If your init container exceeds the max, the pod is rejected.
4. Not setting quotas on system namespaces
Apply quotas only to team namespaces, not kube-system. Restricting system namespaces can break cluster operations.
Summary
LimitRange: Controls each individual container/pod — sets defaults and max/min resource values.
ResourceQuota: Controls the namespace's total budget — prevents one team from consuming all cluster resources.
Use them together:
- LimitRange ensures every pod has resource specs
- ResourceQuota enforces the total namespace budget
These two objects are foundational for running multi-tenant Kubernetes clusters safely.
Practice resource management on a real cluster — KodeKloud has dedicated Kubernetes labs covering LimitRange and ResourceQuota. Spin up a test cluster on DigitalOcean with $200 free credit.