Kubernetes CronJob Not Running / Missed Schedule Fix (2026)
Kubernetes CronJob missed schedule or not triggering? Here's how to debug and fix CronJob scheduling issues — timezone problems, startingDeadlineSeconds, concurrencyPolicy, and more.
Seeing "Missed scheduled time to start a job" events, or is your CronJob simply not running on schedule? Here's the systematic fix.
Common Symptoms
# kubectl describe cronjob my-cronjob
Events:
Missed scheduled time to start a job
# Or the job just never appears
kubectl get jobs -n my-namespace
# No jobs listed despite schedule time passing
# Or only some runs trigger
# Schedule: */5 * * * * but jobs only run every 15 minutes
Step 1: Check CronJob Status
# Check the cronjob itself
kubectl get cronjob my-cronjob -n my-namespace
# Detailed view — look at LAST SCHEDULE and ACTIVE
kubectl describe cronjob my-cronjob -n my-namespace
# Check recent jobs created by the cronjob (they are named <cronjob-name>-<timestamp>)
kubectl get jobs -n my-namespace | grep my-cronjob
# Or label-based (works if your jobTemplate sets a label such as app: my-cronjob)
kubectl get jobs -n my-namespace -l app=my-cronjob
Key fields to check in describe:
- Last Schedule Time — when it last ran
- Active — how many jobs are currently running
- Schedule — verify the cron expression is correct
- Starting Deadline Seconds — this causes most missed-schedule issues
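If you only need those fields, jsonpath pulls them straight off the CronJob object (these are the standard batch/v1 fields):
# When the controller last scheduled a run
kubectl get cronjob my-cronjob -n my-namespace -o jsonpath='{.status.lastScheduleTime}{"\n"}'
# Whether the cronjob is currently suspended
kubectl get cronjob my-cronjob -n my-namespace -o jsonpath='{.spec.suspend}{"\n"}'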
Fix 1: startingDeadlineSeconds Too Low
This is the most common cause of missed schedules.
startingDeadlineSeconds defines how late a job is still allowed to start. If the controller cannot create the Job within this window after the scheduled time (because it is busy, restarting, or down), that run counts as missed and is skipped.
Also: Kubernetes counts missed schedules over the startingDeadlineSeconds window (or since the last scheduled time if the field is unset). If it counts more than 100 missed schedules, typically after controller downtime or clock skew, it stops starting the job for that CronJob entirely until the backlog is cleared.
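You can confirm this condition from the controller-manager logs; the exact message varies by version, but it mentions too many missed start times:
# Look for the missed-start warning in the controller logs
kubectl logs -n kube-system -l component=kube-controller-manager --tail=500 | grep -i "too many missed"
# The same condition surfaces as events with reason FailedNeedsStart
kubectl get events -n my-namespace --field-selector reason=FailedNeedsStart
The fix itself is to raise the deadline in the CronJob spec: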
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"
  # Increase this; the default is unset (no deadline)
  # Values below ~10 seconds risk skipped runs, since the CronJob controller only syncs every 10 seconds
  startingDeadlineSeconds: 300 # allow runs to start up to 5 minutes late
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: my-job
              image: my-image:latest
          restartPolicy: OnFailure
If your controller was down for a while and accumulated 100+ missed schedules:
# Force a re-sync by suspending and resuming
kubectl patch cronjob my-cronjob -n my-namespace \
-p '{"spec":{"suspend":true}}'
kubectl patch cronjob my-cronjob -n my-namespace \
-p '{"spec":{"suspend":false}}'
Fix 2: Timezone Issues
Kubernetes CronJob schedules are interpreted in UTC by default (unless you set timeZone). If you write 0 9 * * * expecting 9am IST, the job actually fires at 9am UTC, which is 2:30pm IST; to get 9am IST on a UTC-interpreted schedule you would need 30 3 * * *.
Check your cluster timezone:
# Check controller-manager timezone
kubectl get pod -n kube-system -l component=kube-controller-manager -o yaml | grep -i tz
# Or check node timezone
kubectl debug node/<node-name> -it --image=busybox -- date
Fix — use the timeZone field (Kubernetes 1.27+):
spec:
  schedule: "0 9 * * 1-5" # 9am, Monday to Friday
  timeZone: "Asia/Kolkata" # IST — job runs at 9am IST
Valid timezone strings: use the IANA tz database format (America/New_York, Europe/London, Asia/Kolkata, etc.)
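A quick sanity check that both fields landed on the object as intended:
# Prints the schedule and timezone the controller will use
kubectl get cronjob my-cronjob -n my-namespace -o jsonpath='{.spec.schedule}{"  "}{.spec.timeZone}{"\n"}'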
For clusters older than 1.27, set the schedule in UTC and document it clearly.
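As a sketch of that workaround: the 9am IST weekday schedule above becomes 3:30am UTC. The patch below assumes the same example CronJob name:
# Pre-1.27: express the schedule in UTC (9:00 IST = 3:30 UTC)
kubectl patch cronjob my-cronjob -n my-namespace \
-p '{"spec":{"schedule":"30 3 * * 1-5"}}'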
Fix 3: concurrencyPolicy Blocking Runs
If a previous job is still running and concurrencyPolicy is Forbid, the next scheduled run is skipped entirely.
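A quick way to check whether the controller still considers a previous run active (this reads the standard .status.active field on the CronJob):
# Lists references to still-running jobs; empty output means nothing is blocking the next run
kubectl get cronjob my-cronjob -n my-namespace -o jsonpath='{.status.active}{"\n"}'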
# Check if old jobs are stuck running
kubectl get jobs -n my-namespace
# Look for jobs that have been running far longer than expected (COMPLETIONS still 0/1)
Options:
spec:
  concurrencyPolicy: Allow   # Multiple concurrent runs OK (default)
  concurrencyPolicy: Forbid  # Skip new run if previous still running
  concurrencyPolicy: Replace # Kill the old job, start new one
If jobs are stuck running forever, they're probably hanging — fix the job itself (add a timeout), not the CronJob policy.
Add a job deadline:
jobTemplate:
  spec:
    activeDeadlineSeconds: 300 # Kill job after 5 minutes
    backoffLimit: 2            # Retry twice before marking failed
    template:
      ...
Fix 4: Wrong Cron Expression
Kubernetes uses standard cron syntax but it's easy to get wrong:
# Format: minute hour day-of-month month day-of-week
# ┌─────────────── minute (0–59)
# │ ┌───────────── hour (0–23)
# │ │ ┌─────────── day of month (1–31)
# │ │ │ ┌───────── month (1–12)
# │ │ │ │ ┌─────── day of week (0–6, Sunday=0)
# │ │ │ │ │
* * * * *
# Every 5 minutes
*/5 * * * *
# Every day at 2am UTC
0 2 * * *
# Every Monday at 9am
0 9 * * 1
# Every weekday at 6pm
0 18 * * 1-5
Test your expression at crontab.guru before applying.
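The API server also validates the expression, so a server-side dry-run catches a malformed schedule before anything is created (the manifest filename here is just an example):
# Fails with a validation error if spec.schedule is not a valid cron expression
kubectl apply -f my-cronjob.yaml --dry-run=server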
# Verify applied schedule
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'
Fix 5: CronJob Controller Not Running
If no CronJobs in the cluster are firing, the kube-controller-manager might be unhealthy:
# Check controller manager
kubectl get pods -n kube-system -l component=kube-controller-manager
# Check logs
kubectl logs -n kube-system -l component=kube-controller-manager --tail=50
# On managed clusters (EKS/GKE/AKS), check the cloud console for control plane health
On EKS, if the control plane is degraded, CronJobs stop firing. This is visible in the AWS console → EKS → cluster → health.
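On self-managed clusters you can also check the controller-manager's leader-election Lease; a stale renew time means no controller instance is actively scheduling (the lease name below is the usual default, but it may differ on some distributions):
# A healthy controller-manager renews this lease every few seconds
kubectl get lease kube-controller-manager -n kube-system -o yaml | grep renewTime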
Fix 6: successfulJobsHistoryLimit Hiding Failures
If jobs ARE running but you can't see them:
spec:
  successfulJobsHistoryLimit: 3 # Keep last 3 successful jobs (default: 3)
  failedJobsHistoryLimit: 1     # Keep last 1 failed job (default: 1)
Increase these for debugging:
kubectl patch cronjob my-cronjob -n my-namespace \
-p '{"spec":{"successfulJobsHistoryLimit":10,"failedJobsHistoryLimit":5}}'
Manually Trigger a CronJob for Testing
# Trigger immediately without waiting for schedule
kubectl create job --from=cronjob/my-cronjob manual-test-$(date +%s) -n my-namespace
# Watch it run
kubectl get jobs -n my-namespace -w
# Check logs
kubectl logs -n my-namespace -l job-name=manual-test-xxxxx
Debugging Checklist
# 1. Check CronJob status
kubectl describe cronjob my-cronjob -n my-namespace
# 2. Check recent jobs
kubectl get jobs -n my-namespace --sort-by=.metadata.creationTimestamp
# 3. Check job logs (find pod name first)
kubectl get pods -n my-namespace -l app=my-cronjob
kubectl logs -n my-namespace <pod-name>
# 4. Check events
kubectl get events -n my-namespace --field-selector reason=FailedNeedsStart
kubectl get events -n my-namespace --field-selector reason=SawCompletedJob
# 5. Check controller manager
kubectl logs -n kube-system -l component=kube-controller-manager | grep my-cronjob
Quick summary:
- Missed schedules after controller restart → increase startingDeadlineSeconds or suspend/resume
- Running at wrong time → add timeZone field (K8s 1.27+)
- Runs skipped → check concurrencyPolicy and stuck jobs
- Job hangs forever → add activeDeadlineSeconds to jobTemplate
- Nothing firing cluster-wide → check kube-controller-manager health