Kubernetes VPA Keeps Evicting Pods? Fix It with In-Place Resize in 1.35
VPA restarting your pods every time it adjusts resources? Here's how to stop the evictions using Kubernetes 1.35's In-Place Pod Resize feature.
You set up Vertical Pod Autoscaler to right-size your pods. It's working — recommending better CPU and memory values. But there's a problem: every time VPA adjusts resources, it evicts your pods and restarts them.
For stateless web services, maybe that's fine. But for JVM apps with 2-minute warm-up times, WebSocket servers with active connections, or batch jobs processing data — those evictions are painful.
The fix is here: Kubernetes 1.35 shipped In-Place Pod Resize as GA, and VPA now supports InPlaceOrRecreate mode. Let me show you how to stop the evictions.
Why VPA Evicts Pods
Before in-place resize, there was no supported way to change a pod's CPU or memory without recreating it: the Kubernetes API rejected modifications to spec.containers[*].resources on a running pod. (The feature shipped as alpha in 1.27 and beta in 1.33; Kubernetes 1.35 makes it GA.)
So VPA had only one option:
- Calculate a new resource recommendation
- Evict the pod (delete it) so its controller creates a replacement
- The VPA admission controller intercepts the replacement pod's creation
- It sets the recommended resources on the new pod
- The pod starts fresh with the new resources
This worked, but it meant:
- Downtime for single-replica deployments
- Cold starts for apps that need warm-up time
- Connection drops for WebSocket/gRPC/long-lived connections
- Lost in-memory state for caching layers
- Wasted computation for batch jobs mid-processing
The Fix: InPlaceOrRecreate Mode
Step 1 — Verify Your Kubernetes Version
In-Place Pod Resize requires Kubernetes 1.35+. Check your server version (note that recent kubectl releases have removed the old --short flag):

```shell
kubectl version
```

If you're on an older version, you'll need to upgrade your cluster first. On managed Kubernetes:
```shell
# EKS
eksctl upgrade cluster --name my-cluster --version 1.35

# GKE
gcloud container clusters upgrade my-cluster --master --cluster-version 1.35

# AKS
az aks upgrade --resource-group myRG --name myCluster --kubernetes-version 1.35.0
```

Step 2 — Add Resize Policy to Your Pods
Update your Deployment to include resizePolicy on each container:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:latest
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: NotRequired
```

The restartPolicy: NotRequired entries tell Kubernetes that the container doesn't need to restart when these resources change. The kubelet just updates the cgroup limits on the fly.
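Not every workload tolerates a live memory change: a JVM, for example, sizes its heap at startup and won't use extra memory granted later. For those cases, resizePolicy also accepts RestartContainer, which restarts just that one container in place (no pod eviction) when the resource changes. A sketch that keeps CPU changes live while restarting on memory changes:

```yaml
resizePolicy:
- resourceName: cpu
  restartPolicy: NotRequired       # CPU changes apply live
- resourceName: memory
  restartPolicy: RestartContainer  # container restarts in place on memory changes
```

This still avoids a full pod recreation: the pod keeps its node placement and IP, and only the affected container restarts.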
Apply the updated deployment:

```shell
kubectl apply -f deployment.yaml
```

Step 3 — Update VPA to InPlaceOrRecreate
Change your VPA's updateMode from Auto (which currently behaves like Recreate) to InPlaceOrRecreate:
```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "InPlaceOrRecreate"
  resourcePolicy:
    containerPolicies:
    - containerName: app
      minAllowed:
        cpu: 50m
        memory: 128Mi
      maxAllowed:
        cpu: 2000m
        memory: 4Gi
```

Apply:

```shell
kubectl apply -f vpa.yaml
```

Step 4 — Verify It's Working
Watch for resize events instead of evictions:
```shell
# Check if pods are being resized (not restarted)
kubectl get pods -l app=my-app -w
```

With the old behavior, you'd see pods terminating and new ones being created. With in-place resize, the pods stay running.
Check the resize status:
```shell
kubectl get pods -l app=my-app -o jsonpath='{range .items[*]}{.metadata.name}: restarts={.status.containerStatuses[0].restartCount}, resize={.status.resize}{"\n"}{end}'
```

If restartCount stays the same and resize is empty (meaning the resize completed), in-place resize is working.
Check VPA recommendations vs actual:
```shell
# What VPA recommends
kubectl get vpa my-app-vpa -o jsonpath='{.status.recommendation.containerRecommendations[0].target}'

# What pods actually have
kubectl get pods -l app=my-app -o jsonpath='{range .items[*]}{.metadata.name}: cpu={.spec.containers[0].resources.requests.cpu}, mem={.spec.containers[0].resources.requests.memory}{"\n"}{end}'
```

Troubleshooting
Pods Still Getting Evicted
Check 1: Is resizePolicy set?
```shell
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resizePolicy}'
```

If it's empty, the pods were created before you added resizePolicy and are still running the old template. Do a rolling restart to pick up the new spec:

```shell
kubectl rollout restart deployment/my-app
```

Check 2: Is VPA updated to InPlaceOrRecreate?
```shell
kubectl get vpa my-app-vpa -o jsonpath='{.spec.updatePolicy.updateMode}'
```

Must show InPlaceOrRecreate, not Auto or Recreate.
Check 3: Is the node at capacity?
If the node can't accommodate the resize, VPA falls back to evicting and recreating the pod so it can be scheduled on a different node with more room:

```shell
kubectl describe node <node-name> | grep -A 8 "Allocated resources"
```

Resize Stuck in "Deferred"
This means the node doesn't have enough allocatable resources for the resize:
```shell
kubectl get pods -l app=my-app -o jsonpath='{range .items[*]}{.metadata.name}: resize={.status.resize}{"\n"}{end}'
```

Solutions:
- Wait — VPA will retry when resources become available
- Scale up node pool to add capacity
- Lower maxAllowed in VPA to prevent oversized recommendations
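For the last option, the cap lives in the VPA's resourcePolicy. A sketch against the example VPA from earlier (the 1000m / 2Gi values are illustrative, not recommendations):

```yaml
resourcePolicy:
  containerPolicies:
  - containerName: app
    maxAllowed:
      cpu: 1000m   # lowered from 2000m
      memory: 2Gi  # lowered from 4Gi
```

VPA clamps its recommendations to these ceilings, so it stops generating resize requests larger than your nodes can actually fit.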
Memory Resize Failing
Decreasing memory limits only works if the container is using less memory than the new limit. If current usage exceeds the target, the resize is infeasible.
Check current usage:
```shell
kubectl top pod -l app=my-app
```

If a pod is using 400Mi and VPA wants to set the limit to 300Mi, that resize will fail. VPA will need to evict and recreate in that case (which is why the mode is InPlaceOrRecreate, not just InPlace).
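The feasibility rule is simple enough to sketch as a shell helper. memory_resize_feasible is a hypothetical function (not part of any Kubernetes tooling); it uses numfmt from GNU coreutils to convert Kubernetes quantities like 400Mi into bytes:

```shell
# Sketch: can the kubelet shrink this container's memory limit in place?
# Feasible only when current usage fits under the proposed limit.
memory_resize_feasible() {
  local usage_bytes target_bytes
  usage_bytes=$(numfmt --from=iec-i "$1")   # e.g. 400Mi -> 419430400
  target_bytes=$(numfmt --from=iec-i "$2")  # e.g. 300Mi -> 314572800
  if [ "$usage_bytes" -le "$target_bytes" ]; then
    echo "feasible"
  else
    echo "infeasible"  # can't shrink below live usage; VPA falls back to recreate
  fi
}

memory_resize_feasible 400Mi 300Mi  # prints "infeasible"
memory_resize_feasible 200Mi 300Mi  # prints "feasible"
```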
Before and After
| Behavior | Old (Recreate) | New (InPlaceOrRecreate) |
|---|---|---|
| Resource change | Pod evicted + recreated | Cgroup limits updated live |
| Downtime | Yes (brief) | None |
| Restart count | Increases | Unchanged |
| Active connections | Dropped | Preserved |
| JVM warm caches | Lost | Preserved |
| Batch job progress | Lost | Preserved |
| Fallback if resize fails | N/A | Evict + recreate |
Wrapping Up
If you've been avoiding VPA because of the eviction problem, that problem is now solved. Update your pods with resizePolicy, switch VPA to InPlaceOrRecreate, and your pods will get right-sized resources without restarts.
The combination of VPA + In-Place Resize gives Kubernetes truly seamless vertical autoscaling. No more choosing between right-sized resources and application stability.
Want to master Kubernetes autoscaling, resource management, and production troubleshooting? The KodeKloud Kubernetes course covers VPA, HPA, and resource optimization with hands-on labs. For a managed cluster to test in-place resize, DigitalOcean Kubernetes supports the latest Kubernetes versions.