
Kubernetes CronJob Not Running / Missed Schedule Fix (2026)

Kubernetes CronJob missed schedule or not triggering? Here's how to debug and fix CronJob scheduling issues — timezone problems, startingDeadlineSeconds, concurrencyPolicy, and more.

DevOpsBoys · May 5, 2026 · 5 min read

Seeing "missed scheduled time to start a job" events, or a CronJob that simply never runs on schedule? Here's the systematic fix.


Common Symptoms

# kubectl describe cronjob my-cronjob
Events:
  Missed scheduled time to start a job

# Or the job just never appears
kubectl get jobs -n my-namespace
# No jobs listed despite schedule time passing

# Or only some runs trigger
# Schedule: */5 * * * * but jobs only run every 15 minutes
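
Before digging into a specific CronJob, a quick cluster-wide triage helps. In particular, check the SUSPEND column: a CronJob with suspend: true silently never fires.

bash
# SUSPEND=True means the CronJob will never fire until resumed
kubectl get cronjobs -A

# Check the suspend flag on one CronJob directly
kubectl get cronjob my-cronjob -n my-namespace -o jsonpath='{.spec.suspend}'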

Step 1: Check CronJob Status

bash
# Check the cronjob itself
kubectl get cronjob my-cronjob -n my-namespace
 
# Detailed view — look at LAST SCHEDULE and ACTIVE
kubectl describe cronjob my-cronjob -n my-namespace
 
# Check recent jobs created by the cronjob (their names are prefixed with the cronjob name)
kubectl get jobs -n my-namespace | grep my-cronjob
 
# Or label-based
kubectl get jobs -n my-namespace -l app=my-cronjob

Key fields to check in describe:

  • Last Schedule Time — when it last ran
  • Active — how many jobs currently running
  • Schedule — verify the cron expression is correct
  • Starting Deadline Seconds — this causes most missed-schedule issues
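
To pull these fields without scrolling through the full describe output, jsonpath works; a small sketch using the example names from above:

bash
# Schedule, last schedule time, and suspend flag in one shot
kubectl get cronjob my-cronjob -n my-namespace \
  -o jsonpath='{.spec.schedule}{"\n"}{.status.lastScheduleTime}{"\n"}{.spec.suspend}{"\n"}'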

Fix 1: startingDeadlineSeconds Too Low

This is the most common cause of missed schedules.

startingDeadlineSeconds defines how late a job is allowed to start. If the controller misses the exact trigger time by more than this window (for example, because it was busy or restarting), the run is skipped.

There is a second effect: the controller also counts missed schedules. If startingDeadlineSeconds is unset, it counts every miss since the last scheduled run, and once it reaches 100+ (typically after controller downtime) it stops scheduling entirely with a "too many missed start times" error. Setting startingDeadlineSeconds bounds that count to the window, so the CronJob can recover on its own.

yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"
  # Set this explicitly; the default is nil (no deadline, but unbounded miss counting)
  # Avoid values under ~10s: the controller only reconciles about every 10 seconds
  startingDeadlineSeconds: 300  # allow runs to start up to 5 minutes late
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: my-image:latest
          restartPolicy: OnFailure
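
To confirm you've actually hit the 100-miss cutoff rather than an ordinary skipped run, check the controller's warning events. The exact message text varies slightly between Kubernetes versions, so grep loosely:

bash
# Look for a "too many missed start times" warning on the CronJob
kubectl get events -n my-namespace --field-selector involvedObject.name=my-cronjob
kubectl describe cronjob my-cronjob -n my-namespace | grep -i missed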

If your controller was down for a while and accumulated 100+ missed schedules:

bash
# Force a re-sync by suspending and resuming
kubectl patch cronjob my-cronjob -n my-namespace \
  -p '{"spec":{"suspend":true}}'
 
kubectl patch cronjob my-cronjob -n my-namespace \
  -p '{"spec":{"suspend":false}}'

Fix 2: Timezone Issues

Kubernetes CronJob schedules are evaluated in the kube-controller-manager's timezone, which is UTC on virtually every cluster. If you write 0 9 * * * intending 9am IST, the job actually fires at 09:00 UTC, which is 2:30pm IST.

Check your cluster timezone:

bash
# Check controller-manager timezone
kubectl get pod -n kube-system -l component=kube-controller-manager -o yaml | grep -i tz
 
# Or check node timezone
kubectl debug node/<node-name> -it --image=busybox -- date

Fix — use timeZone field (Kubernetes 1.27+):

yaml
spec:
  schedule: "0 9 * * 1-5"  # 9am
  timeZone: "Asia/Kolkata"  # IST — job runs at 9am IST

Valid timezone strings: use the IANA tz database format (America/New_York, Europe/London, Asia/Kolkata, etc.)

For clusters older than 1.27, set the schedule in UTC and document it clearly.
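
As a concrete example: IST is UTC+5:30, so 9:00am IST is 3:30am UTC. A sketch of the pre-1.27 workaround, patching the example CronJob from above:

bash
# Confirm whether the server supports timeZone (GA in 1.27)
kubectl version

# Pre-1.27: express 9:00 IST as 3:30 UTC directly in the schedule
kubectl patch cronjob my-cronjob -n my-namespace --type merge \
  -p '{"spec":{"schedule":"30 3 * * 1-5"}}'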


Fix 3: concurrencyPolicy Blocking Runs

If a previous job is still running and concurrencyPolicy is Forbid, the next scheduled run is skipped entirely.

bash
# Check if old jobs are stuck running
kubectl get jobs -n my-namespace
# Look for jobs with STATUS = Running for too long

Options:

yaml
spec:
  # Choose exactly one:
  concurrencyPolicy: Allow   # Multiple concurrent runs OK (default)
  # concurrencyPolicy: Forbid  # Skip the new run if the previous is still running
  # concurrencyPolicy: Replace # Kill the old job, start the new one

If jobs are stuck running forever, they're probably hanging — fix the job itself (add a timeout), not the CronJob policy.
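
To unblock the schedule right now, delete the stuck job by hand (the job name below is a placeholder; deleting a job also removes its pods by default):

bash
# Find the long-running job, then delete it so the next run can fire
kubectl get jobs -n my-namespace --sort-by=.metadata.creationTimestamp
kubectl delete job <stuck-job-name> -n my-namespace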

Add a job deadline:

yaml
jobTemplate:
  spec:
    activeDeadlineSeconds: 300  # Kill job after 5 minutes
    backoffLimit: 2             # Retry twice before marking failed
    template:
      ...

Fix 4: Wrong Cron Expression

Kubernetes uses standard cron syntax but it's easy to get wrong:

# Format: minute hour day-of-month month day-of-week
# ┌─────────────── minute (0–59)
# │ ┌───────────── hour (0–23)
# │ │ ┌─────────── day of month (1–31)
# │ │ │ ┌───────── month (1–12)
# │ │ │ │ ┌─────── day of week (0–6, Sunday=0)
# │ │ │ │ │
  * * * * *

# Every 5 minutes
*/5 * * * *

# Every day at 2am UTC
0 2 * * *

# Every Monday at 9am
0 9 * * 1

# Every weekday at 6pm
0 18 * * 1-5

Test your expression at crontab.guru before applying.

bash
# Verify applied schedule
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'
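
The API server also validates the schedule at admission time, so a server-side dry-run catches a bad expression before anything is created (my-cronjob.yaml is a placeholder for your manifest):

bash
# An invalid schedule is rejected here instead of failing silently later
kubectl apply -f my-cronjob.yaml --dry-run=server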

Fix 5: CronJob Controller Not Running

If no CronJobs in the cluster are firing, the kube-controller-manager might be unhealthy:

bash
# Check controller manager
kubectl get pods -n kube-system -l component=kube-controller-manager
 
# Check logs
kubectl logs -n kube-system -l component=kube-controller-manager --tail=50
 
# On managed clusters (EKS/GKE/AKS), check the cloud console for control plane health

On EKS, if the control plane is degraded, CronJobs stop firing. This is visible in the AWS console → EKS → cluster → health.
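
On self-managed clusters, you can also verify that one controller-manager instance actually holds the leader lease; if holderIdentity is stale, the CronJob controller (which runs inside kube-controller-manager) isn't doing anything:

bash
# The CronJob controller runs in whichever replica holds this lease
kubectl get lease kube-controller-manager -n kube-system -o yaml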


Fix 6: successfulJobsHistoryLimit Hiding Failures

If jobs ARE running but you can't see them:

yaml
spec:
  successfulJobsHistoryLimit: 3   # Keep last 3 successful jobs (default: 3)
  failedJobsHistoryLimit: 1       # Keep last 1 failed job (default: 1)

Increase these for debugging:

bash
kubectl patch cronjob my-cronjob -n my-namespace \
  -p '{"spec":{"successfulJobsHistoryLimit":10,"failedJobsHistoryLimit":5}}'

Manually Trigger a CronJob for Testing

bash
# Trigger immediately without waiting for schedule
kubectl create job --from=cronjob/my-cronjob manual-test-$(date +%s) -n my-namespace
 
# Watch it run
kubectl get jobs -n my-namespace -w
 
# Check logs
kubectl logs -n my-namespace -l job-name=manual-test-xxxxx
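
Note that manually created jobs have no ownerReference back to the CronJob, so the history limits won't garbage-collect them; delete them yourself when done (the name is whatever the create command printed):

bash
# Clean up the manual test job
kubectl delete job manual-test-<timestamp> -n my-namespace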

Debugging Checklist

bash
# 1. Check CronJob status
kubectl describe cronjob my-cronjob -n my-namespace
 
# 2. Check recent jobs
kubectl get jobs -n my-namespace --sort-by=.metadata.creationTimestamp
 
# 3. Check job logs (find pod name first)
kubectl get pods -n my-namespace -l app=my-cronjob
kubectl logs -n my-namespace <pod-name>
 
# 4. Check events
kubectl get events -n my-namespace --field-selector reason=FailedNeedsStart
kubectl get events -n my-namespace --field-selector reason=SawCompletedJob
 
# 5. Check controller manager
kubectl logs -n kube-system -l component=kube-controller-manager | grep my-cronjob

Quick summary:

  • Missed schedules after controller restart → increase startingDeadlineSeconds or suspend/resume
  • Running at wrong time → add timeZone field (K8s 1.27+)
  • Runs skipped → check concurrencyPolicy and stuck jobs
  • Job hangs forever → add activeDeadlineSeconds to jobTemplate
  • Nothing firing cluster-wide → check kube-controller-manager health