EKS Fargate Pod Not Scheduling — Causes and Fixes (2026)
Pods stuck in Pending on EKS Fargate? Here are the 8 most common reasons Fargate pods won't schedule and exactly how to fix each one.
EKS Fargate is serverless Kubernetes — no nodes to manage. But when pods get stuck in Pending with cryptic errors, debugging feels harder than on regular EC2 nodes. Here are the real causes and fixes.
How Fargate Scheduling Works
Fargate doesn't use EC2 nodes. Instead, AWS provisions a micro-VM per pod that matches a Fargate Profile. If no profile matches the pod, it never schedules.
Pod created
↓
EKS checks Fargate profiles
↓
Profile matches? → Fargate VM provisioned → Pod runs
No match? → Pod stays Pending forever
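The matching step above is exact string matching on the namespace, which is worth internalizing: a profile selecting "production" does not cover a pod in "prod". A minimal sketch of that rule (the `profile_matches_ns` helper is hypothetical and grep-based; a real script would parse the selectors JSON with jq):

```shell
# Given the "selectors" JSON from `aws eks describe-fargate-profile`,
# check whether a pod's namespace is covered. Matching is exact:
# "production" does NOT cover a pod in "prod".
profile_matches_ns() {
  selectors_json="$1"; ns="$2"
  printf '%s' "$selectors_json" | grep -q "\"namespace\": \"$ns\""
}

selectors='[{"namespace": "production"}]'
profile_matches_ns "$selectors" production && echo "production: covered"
profile_matches_ns "$selectors" prod      || echo "prod: NOT covered"
```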
Cause 1: Fargate Profile Doesn't Match Namespace
The most common reason: Fargate profiles are namespace-scoped, and a pod only schedules onto Fargate if some profile selects its namespace.
Symptom:
kubectl describe pod my-pod -n my-namespace
# Events: 0/0 nodes are available: Scheduling is disabled on all nodes.

Fix:
# Check what profiles exist
aws eks list-fargate-profiles --cluster-name my-cluster
# Check which namespaces a profile covers
aws eks describe-fargate-profile \
--cluster-name my-cluster \
--fargate-profile-name my-profile
# Output shows:
# "selectors": [{"namespace": "production"}]
# ← if your pod is in "prod", it won't match

Create a profile for the correct namespace:
aws eks create-fargate-profile \
--cluster-name my-cluster \
--fargate-profile-name prod-profile \
--pod-execution-role-arn arn:aws:iam::123456789:role/AmazonEKSFargatePodExecutionRole \
--selectors namespace=prod

Cause 2: Label Selector Mismatch
Fargate profiles can filter by labels too. If your profile requires a label the pod doesn't have — no scheduling.
Check profile selectors:
aws eks describe-fargate-profile \
--cluster-name my-cluster \
--fargate-profile-name my-profile \
--query 'fargateProfile.selectors'
# Output:
# [{"namespace": "production", "labels": {"fargate": "true"}}]

Fix: Add the required label to your pod spec:
metadata:
  labels:
    fargate: "true"  # ← must match profile selector

Or remove the label requirement from the Fargate profile if it's not needed.
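The label rule works like the namespace rule: every key/value pair in the profile's selector must be present on the pod. A rough sketch of that logic (the `labels_match` helper is hypothetical and uses space-separated `k=v` pairs instead of real JSON):

```shell
# Return 0 only if every selector pair appears in the pod's labels.
labels_match() {
  pod_labels="$1"; selector="$2"
  for kv in $selector; do
    case " $pod_labels " in
      *" $kv "*) ;;        # this pair is present, keep checking
      *) return 1 ;;       # any missing pair disqualifies the pod
    esac
  done
  return 0
}

labels_match "app=web fargate=true" "fargate=true" && echo "matches profile"
labels_match "app=web"              "fargate=true" || echo "missing fargate=true"
```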
Cause 3: Missing Pod Execution IAM Role
Fargate needs an IAM role to pull images, write logs, and run the pod.
Symptom:
Pod stuck in Pending. No events. Profile exists.
Check:
aws eks describe-fargate-profile \
--cluster-name my-cluster \
--fargate-profile-name my-profile \
--query 'fargateProfile.podExecutionRoleArn'

Create the role if missing:
# Trust policy
cat > fargate-trust.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "eks-fargate-pods.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role \
--role-name AmazonEKSFargatePodExecutionRole \
--assume-role-policy-document file://fargate-trust.json
aws iam attach-role-policy \
--role-name AmazonEKSFargatePodExecutionRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy

Cause 4: Resource Request Exceeds Fargate Limits
Fargate has fixed vCPU/memory tiers. Fargate rounds your requests up to the nearest valid combination (adding 256 MB of memory for Kubernetes components), but a request that exceeds the largest tier can never be placed, so the pod won't schedule.
Valid Fargate CPU/Memory combinations:
| vCPU | Memory options |
|---|---|
| 0.25 | 0.5 GB, 1 GB, 2 GB |
| 0.5 | 1 GB – 4 GB (1 GB increments) |
| 1 | 2 GB – 8 GB (1 GB increments) |
| 2 | 4 GB – 16 GB (1 GB increments) |
| 4 | 8 GB – 30 GB (1 GB increments) |
| 8 | 16 GB – 60 GB (4 GB increments) |
| 16 | 32 GB – 120 GB (8 GB increments) |
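The table above can be encoded as a quick local check. A sketch for the lower tiers only (the `valid_fargate_combo` helper is hypothetical; CPU in vCPU, memory in whole GB except the 0.5 GB option):

```shell
# Return 0 if the cpu/memory pair is a valid Fargate combination.
# Covers only the 0.25–4 vCPU tiers from the table above.
valid_fargate_combo() {
  cpu="$1"; mem="$2"
  case "$cpu" in
    0.25) case "$mem" in 0.5|1|2) return 0;; esac ;;
    0.5)  [ "$mem" -ge 1 ] 2>/dev/null && [ "$mem" -le 4 ]  && return 0 ;;
    1)    [ "$mem" -ge 2 ] 2>/dev/null && [ "$mem" -le 8 ]  && return 0 ;;
    2)    [ "$mem" -ge 4 ] 2>/dev/null && [ "$mem" -le 16 ] && return 0 ;;
    4)    [ "$mem" -ge 8 ] 2>/dev/null && [ "$mem" -le 30 ] && return 0 ;;
  esac
  return 1
}

valid_fargate_combo 0.5 1  && echo "0.5 vCPU / 1 GB: valid"
valid_fargate_combo 0.25 4 || echo "0.25 vCPU / 4 GB: invalid"
```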
Check your pod requests:
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].resources}'

Fix: Use valid combinations:
resources:
  requests:
    cpu: "0.5"
    memory: "1Gi"  # valid: 0.5 vCPU + 1 GB
  limits:
    cpu: "0.5"
    memory: "1Gi"

Cause 5: Fargate Profile Is Being Created/Deleted
Profiles have a status. If it's not ACTIVE, pods won't schedule.
aws eks describe-fargate-profile \
--cluster-name my-cluster \
--fargate-profile-name my-profile \
--query 'fargateProfile.status'
# CREATING / DELETING / ACTIVE / CREATE_FAILED / DELETE_FAILED

Wait for ACTIVE, or fix a failed profile:
# Delete failed profile and recreate
aws eks delete-fargate-profile \
--cluster-name my-cluster \
--fargate-profile-name my-profile

Cause 6: kube-system Pods on Fargate (CoreDNS)
If you added kube-system to a Fargate profile, note that the default CoreDNS deployment carries an eks.amazonaws.com/compute-type: ec2 annotation that pins it to EC2 nodes. It must be removed before CoreDNS can run on Fargate.
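For reference, the relevant annotation sits on the CoreDNS pod template and looks like this (config fragment; the JSON patch in the fix removes it):

```yaml
spec:
  template:
    metadata:
      annotations:
        eks.amazonaws.com/compute-type: ec2   # pins CoreDNS to EC2; remove for Fargate
```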
# Check CoreDNS deployment
kubectl describe deployment coredns -n kube-system | grep eks.amazonaws
# If missing, patch it:
kubectl patch deployment coredns \
-n kube-system \
--type json \
-p='[{"op":"remove","path":"/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

Cause 7: DaemonSets Don't Work on Fargate
Fargate doesn't support DaemonSets — there are no nodes. Each Fargate pod is its own VM.
kubectl describe pod my-daemonset-pod -n my-namespace
# Warning: DaemonSet pods cannot be scheduled on Fargate

Fix: Use a sidecar pattern instead, or run DaemonSet components as regular pods.
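Since there are no shared nodes, per-node agents (log shippers, metrics collectors) become per-pod sidecars. A minimal sketch with placeholder names and images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging          # hypothetical example
  namespace: prod
spec:
  containers:
    - name: app
      image: my-app:1.0           # placeholder image
    - name: log-forwarder         # the role a DaemonSet would normally fill
      image: fluent/fluent-bit:2.2
      resources:
        requests:
          cpu: 50m
          memory: 64Mi
```

Remember that the sidecar's requests count toward the pod's Fargate vCPU/memory tier.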
Cause 8: Subnet Has No Available IPs
Fargate assigns an ENI per pod. If your VPC subnet is exhausted, pods can't start.
# Check available IPs in subnets
aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=vpc-12345" \
--query 'Subnets[*].{Subnet:SubnetId,Available:AvailableIpAddressCount}'

Fix: Use /24 or larger subnets for Fargate, or add new subnets to the VPC.
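Because every Fargate pod consumes one IP, subnet sizing maps directly to pod capacity. A back-of-the-envelope helper (the `subnet_pod_capacity` function is hypothetical; AWS reserves 5 addresses in every subnet):

```shell
# Usable pod IPs in a subnet: 2^(32 - prefix) minus AWS's 5 reserved addresses.
subnet_pod_capacity() {
  prefix="$1"
  echo $(( (1 << (32 - prefix)) - 5 ))
}

echo "/24 subnet: $(subnet_pod_capacity 24) pod IPs"   # 251
echo "/26 subnet: $(subnet_pod_capacity 26) pod IPs"   # 59
```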
Debug Checklist
# 1. Check pod events
kubectl describe pod <pod-name> -n <namespace>
# 2. Check profile status
aws eks list-fargate-profiles --cluster-name <cluster>
aws eks describe-fargate-profile --cluster-name <cluster> --fargate-profile-name <profile>
# 3. Check profile matches pod namespace + labels
kubectl get pod <pod-name> -n <namespace> --show-labels
# 4. Check resource requests are valid
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].resources}'
# 5. Check subnet IPs
aws ec2 describe-subnets --query 'Subnets[*].{ID:SubnetId,Free:AvailableIpAddressCount}'

Resources
- KodeKloud AWS EKS Course — hands-on Fargate labs
- AWS EKS Workshop — Fargate deep dive