
GitHub Actions Job Timeout — Every Fix (2026)

Your GitHub Actions job times out after 6 hours or hits a custom timeout limit. Here's every cause — hung Docker builds, hanging tests, stuck deployments, missing timeout config — and the exact fix.

DevOpsBoys · Apr 27, 2026 · 6 min read

Your GitHub Actions job just ran for 6 hours and then failed with:

Error: The operation was canceled.
The runner has received a shutdown signal. This can happen when the runner service is stopped, 
a manually started runner is cancelled, or the job's timeout is reached.

Or you hit a custom timeout-minutes limit with no clear reason why the job didn't finish.

Here's every cause and the exact fix.


GitHub Actions Default Timeouts

| Scope | Default | Maximum |
| --- | --- | --- |
| Job `timeout-minutes` | 360 minutes (6 hours) | 360 minutes |
| Step `timeout-minutes` | None (inherits the job limit) | Job limit |
| Workflow | No explicit limit | 35 days per run (queued + running) |

A job silently hanging for hours before GitHub kills it is the most common scenario. The key is finding what is hanging.


Fix 1: Set Explicit Timeouts to Fail Fast

If you don't set a timeout, GitHub gives you 6 hours. A build that should take 10 minutes can silently waste 6 hours before you notice.

Add timeouts at every level:

yaml
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 30   # Job-level — kill if entire job takes > 30 min
    steps:
    - name: Build Docker image
      timeout-minutes: 15  # Step-level — kill if this step takes > 15 min
      run: docker build -t myapp:latest .
 
    - name: Run tests
      timeout-minutes: 10
      run: npm test
 
    - name: Deploy
      timeout-minutes: 5
      run: kubectl rollout status deployment/myapp --timeout=4m

With explicit timeouts, a hang fails in minutes instead of hours.


Fix 2: Hanging Docker Build

Symptom: the docker build step runs forever, producing no output for long stretches.

Cause 1: Build stuck waiting for user input

Some Dockerfiles or build scripts prompt for confirmation:

dockerfile
# BAD — apt-get prompts for confirmation without -y
RUN apt-get install nginx
# Hangs waiting for [Y/n]
 
# GOOD
RUN apt-get update && apt-get install -y nginx
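Even with -y, some Debian packages (tzdata is the classic offender) can still block on an interactive configuration prompt. Setting DEBIAN_FRONTEND=noninteractive for the install step is a common belt-and-braces fix:

```dockerfile
# Suppress all interactive prompts during package installation.
# tzdata and similar packages can otherwise block even with -y.
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y nginx
```

Setting it per-RUN rather than with ENV keeps the variable out of the final image.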

Cause 2: COPY or ADD hanging on large files

dockerfile
# BAD — COPYing node_modules (huge, slow)
COPY . .  # copies node_modules if no .dockerignore
 
# GOOD — use .dockerignore

Create .dockerignore:

node_modules/
.git/
*.log
dist/
.env

Cause 3: Network request in Dockerfile hanging

dockerfile
# Can hang if the URL is slow or unreachable
RUN curl -fsSL https://slow-server.com/install.sh | sh
 
# Add timeout
RUN curl -fsSL --max-time 60 https://slow-server.com/install.sh | sh

Enable BuildKit for better output and performance:

yaml
- name: Build Docker image
  env:
    DOCKER_BUILDKIT: 1
  run: docker build --progress=plain -t myapp:latest .

--progress=plain shows every build step — you can see exactly which layer is hanging.
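If you want a per-command limit tighter than the step's timeout-minutes (which only has minute granularity), you can wrap the command in GNU coreutils `timeout`, which is available on GitHub's Ubuntu runners. A minimal sketch of the behavior:

```shell
#!/bin/sh
# GNU coreutils `timeout` kills the command after the given duration
# and exits with code 124 when the limit is hit.
timeout 0.5s sleep 5
echo "exit code: $?"   # prints "exit code: 124"
```

In a workflow step this would look like `run: timeout 600 docker build -t myapp:latest .` to cap the build at 10 minutes while still letting the rest of the step run.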


Fix 3: Test Suite Hanging

Symptom: npm test or pytest runs but never finishes.

Common causes:

  • A test that opens a server and never closes it
  • A test waiting for a database/service that isn't available
  • Jest run without --forceExit (try --detectOpenHandles locally to find the leaked handle)

yaml
# Node.js — add --forceExit and a timeout
- name: Run tests
  run: npx jest --forceExit --testTimeout=30000
  timeout-minutes: 10
 
# Python — add timeout to pytest
- name: Run tests
  run: pytest --timeout=30 -x
  timeout-minutes: 10

For integration tests that need services:

yaml
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
 
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: test
        options: >-
          --health-cmd pg_isready
          --health-interval 5s
          --health-timeout 5s
          --health-retries 10
 
    steps:
    - uses: actions/checkout@v4
    - name: Wait for services
      run: |
        # Explicitly wait (bounded at 60s) instead of hoping services are ready
        timeout 60 sh -c 'until pg_isready -h localhost -p 5432; do sleep 1; done'
    - name: Run tests
      run: npm test

Fix 4: kubectl rollout status Hanging

Symptom: kubectl rollout status deployment/myapp runs forever because the deployment never becomes ready.

Fix — Always add --timeout:

yaml
- name: Deploy and verify
  run: |
    kubectl set image deployment/myapp myapp=$IMAGE_TAG
    kubectl rollout status deployment/myapp --timeout=3m
  timeout-minutes: 5

If the rollout doesn't complete within 3 minutes, the command exits non-zero and the step fails immediately instead of hanging.

Also check why the deployment isn't completing:

yaml
- name: Deploy
  run: |
    kubectl set image deployment/myapp myapp=$IMAGE_TAG
    if ! kubectl rollout status deployment/myapp --timeout=3m; then
      echo "Deployment failed. Pod status:"
      kubectl get pods -l app=myapp
      echo "Recent events:"
      kubectl describe deployment/myapp | tail -20
      exit 1
    fi

Fix 5: SSH/Wait Loop Hanging

Symptom: A step that SSHs into a server or polls a URL runs forever.

bash
# BAD — no timeout on SSH
ssh user@server "deploy.sh"
 
# GOOD — fail fast on connection problems and never prompt interactively
ssh -o BatchMode=yes \
    -o ConnectTimeout=10 \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    user@server "deploy.sh"

For polling loops, always add a maximum iteration count:

bash
# BAD — infinite loop
while ! curl -s http://myapp/health; do sleep 5; done
 
# GOOD — fail after 2 minutes
ATTEMPTS=0
MAX_ATTEMPTS=24  # 24 * 5s = 120s
until curl -s http://myapp/health | grep -q '"status":"ok"'; do
  ATTEMPTS=$((ATTEMPTS+1))
  if [ $ATTEMPTS -ge $MAX_ATTEMPTS ]; then
    echo "Health check failed after ${MAX_ATTEMPTS} attempts"
    exit 1
  fi
  sleep 5
done
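curl can also do the bounded retry itself, which is shorter than a hand-rolled loop (this assumes curl 7.71+ for --retry-all-errors; note it retries on any failure but does not inspect the response body the way the grep loop above does):

```shell
#!/bin/sh
# Bounded polling with curl's built-in retry logic:
# -sf              silent, and fail on HTTP errors (4xx/5xx)
# --retry 24       up to 24 retries, --retry-delay 5 seconds apart
# --retry-max-time 120   give up entirely after 120 seconds
# --retry-all-errors     also retry on connection-level errors
curl -sf --retry 24 --retry-delay 5 --retry-max-time 120 \
     --retry-all-errors http://myapp/health
```

The exit code is non-zero if the endpoint never becomes healthy, so the step fails just like the explicit loop.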

Fix 6: Self-Hosted Runner Issues

Symptom: Jobs time out only on self-hosted runners, not GitHub-hosted.

Self-hosted runners can be stuck, have resource constraints, or have stale processes from previous runs.

yaml
# Add cleanup step at the start of every job
jobs:
  build:
    runs-on: self-hosted
    steps:
    - name: Cleanup runner
      run: |
        docker system prune -f --volumes
        rm -rf "$GITHUB_WORKSPACE"/*
    
    - uses: actions/checkout@v4
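A related guard (it works on hosted runners too): cancel superseded runs of the same branch so a stuck older job can't tie up the runner. This uses GitHub Actions' built-in concurrency setting:

```yaml
# Workflow-level: only the newest run per branch stays alive
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```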

Check runner health:

bash
# On the runner machine
systemctl status actions.runner.*
journalctl -u actions.runner.* -n 100

Restart the runner service if it's stuck:

bash
systemctl restart actions.runner.myorg.myrepo

Fix 7: npm install / pip install Hanging

Cause: Package registry rate limiting or connectivity issue.

yaml
# npm — add timeout and registry
- name: Install dependencies
  run: npm ci --prefer-offline
  timeout-minutes: 5
  env:
    NPM_CONFIG_REGISTRY: https://registry.npmjs.org
 
# pip — add timeout
- name: Install Python dependencies
  run: pip install --timeout 60 -r requirements.txt
  timeout-minutes: 5

For npm ci, use caching to avoid downloading packages on every run:

yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
 
- run: npm ci

Diagnosing a Hanging Job

Add debug output to find where it's hanging:

yaml
- name: Debug — show running processes
  if: always()   # Run even if previous steps failed
  run: |
    ps aux
    df -h
    free -h
    docker ps

Or use tmate to SSH into a hanging runner for live debugging:

yaml
- name: Setup tmate session on failure
  uses: mxschmitt/action-tmate@v3
  if: failure()
  timeout-minutes: 15

This opens an SSH tunnel so you can interactively debug the runner when a job fails.
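If the repository is public, it's worth locking the tmate session to your own SSH key; the action supports a limit-access-to-actor input for this:

```yaml
- name: Setup tmate session on failure
  uses: mxschmitt/action-tmate@v3
  if: failure()
  timeout-minutes: 15
  with:
    limit-access-to-actor: true   # only the user who triggered the run can connect
```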


Summary Checklist

yaml
jobs:
  build:
    timeout-minutes: 30           # Always set job timeout
    steps:
    - run: docker build ...
      timeout-minutes: 15         # Timeout each slow step
      env:
        DOCKER_BUILDKIT: 1        # Better Docker output
    
    - run: npm test -- --forceExit  # Force test runner to exit
      timeout-minutes: 10
    
    - run: |
        kubectl rollout status ... --timeout=3m  # K8s timeout
      timeout-minutes: 5
    
    - name: Debug on failure
      if: failure()               # Dump state when things go wrong
      run: kubectl describe pods

Setting explicit timeouts at both job and step level is the single highest-ROI change — it turns 6-hour mystery hangs into fast, obvious failures.
