GitHub Actions Job Timeout — Every Fix (2026)
Your GitHub Actions job times out after 6 hours or hits a custom timeout limit. Here's every cause — hung Docker builds, hanging tests, stuck deployments, missing timeout config — and the exact fix.
Your GitHub Actions job just ran for 6 hours and then failed with:
Error: The operation was canceled.
The runner has received a shutdown signal. This can happen when the runner service is stopped,
a manually started runner is cancelled, or the job's timeout is reached.
Or you hit a custom timeout-minutes limit with no clear reason why the job didn't finish.
Here's every cause and the exact fix.
GitHub Actions Default Timeouts
| Scope | Default | Maximum |
|---|---|---|
| Job timeout-minutes | 360 minutes (6 hours) | 360 minutes on GitHub-hosted runners |
| Step timeout-minutes | No limit (inherits job) | Job limit |
| Workflow run | No explicit setting | 35 days |
A job silently hanging for hours before GitHub kills it is the most common scenario. The key is finding what is hanging.
Fix 1: Set Explicit Timeouts to Fail Fast
If you don't set a timeout, GitHub gives you 6 hours. A build that should take 10 minutes can silently waste 6 hours before you notice.
Add timeouts at every level:
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 30  # Job-level — kill if entire job takes > 30 min
    steps:
      - name: Build Docker image
        timeout-minutes: 15  # Step-level — kill if this step takes > 15 min
        run: docker build -t myapp:latest .
      - name: Run tests
        timeout-minutes: 10
        run: npm test
      - name: Deploy
        timeout-minutes: 5
        run: kubectl rollout status deployment/myapp --timeout=4m
With explicit timeouts, a hang fails in minutes instead of hours.
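For even finer control inside a single step, the coreutils timeout utility (preinstalled on Ubuntu runners) can cap one command without failing the rest of the step's script. A minimal sketch:

```shell
# Sketch: cap a single command with coreutils `timeout` so it cannot
# outlive its budget even inside a longer step (Linux runner assumed).
if timeout 2s sleep 10; then
  echo "command finished in time"
else
  status=$?
  # coreutils `timeout` exits with 124 when the time limit is hit
  echo "command killed after timeout (exit ${status})"
fi
```

This is useful when one step runs several commands and only one of them is hang-prone.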
Fix 2: Hanging Docker Build
Symptom: The docker build step runs forever, producing no output for long stretches.
Cause 1: Build stuck waiting for user input
Some Dockerfiles or build scripts prompt for confirmation:
# BAD — apt-get prompts for confirmation without -y
RUN apt-get install nginx
# Hangs waiting for [Y/n]
# GOOD
RUN apt-get update && apt-get install -y nginx
Cause 2: COPY or ADD hanging on large files
# BAD — COPYing node_modules (huge, slow)
COPY . . # copies node_modules if no .dockerignore
# GOOD — use .dockerignore
Create .dockerignore:
node_modules/
.git/
*.log
dist/
.env
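A small pre-build guard can catch a missing exclusion before it bloats the build context. The sketch below is illustrative, not a standard tool; the file layout is simulated in a scratch directory:

```shell
# Hypothetical pre-build guard: fail fast when node_modules exists
# but .dockerignore does not exclude it (paths are illustrative).
set -eu
workdir=$(mktemp -d)   # demo in a scratch directory
cd "$workdir"
mkdir node_modules     # simulate an installed dependency tree
printf 'node_modules/\n.git/\n' > .dockerignore

if [ -d node_modules ] && ! grep -qx 'node_modules/' .dockerignore 2>/dev/null; then
  echo "WARNING: node_modules is not excluded by .dockerignore; build context will be huge"
  exit 1
fi
echo "build context check passed"
```

Run as an early step, this turns a silent multi-gigabyte context upload into an immediate, explained failure.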
Cause 3: Network request in Dockerfile hanging
# Can hang if the URL is slow or unreachable
RUN curl -fsSL https://slow-server.com/install.sh | sh
# Add timeout
RUN curl -fsSL --max-time 60 https://slow-server.com/install.sh | sh
Enable BuildKit for better output and performance:
- name: Build Docker image
  env:
    DOCKER_BUILDKIT: 1
  run: docker build --progress=plain -t myapp:latest .
--progress=plain shows every build step — you can see exactly which layer is hanging.
Fix 3: Test Suite Hanging
Symptom: npm test or pytest runs but never finishes.
Common causes:
- A test that opens a server and never closes it
- A test waiting for a database/service that isn't available
- Jest keeping the process alive through open handles because --forceExit is not set
# Node.js — add --forceExit and a timeout
- name: Run tests
  run: npx jest --forceExit --testTimeout=30000
  timeout-minutes: 10

# Python — add timeout to pytest (requires the pytest-timeout plugin)
- name: Run tests
  run: pytest --timeout=30 -x
  timeout-minutes: 10
For integration tests that need services:
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432  # Map the port so the runner host can reach it
        options: >-
          --health-cmd pg_isready
          --health-interval 5s
          --health-timeout 5s
          --health-retries 10
    steps:
      - uses: actions/checkout@v4
      - name: Wait for services
        run: |
          # Explicitly wait instead of hoping services are ready
          until pg_isready -h localhost -p 5432; do sleep 1; done
      - name: Run tests
        run: npm test
Fix 4: kubectl rollout status Hanging
Symptom: kubectl rollout status deployment/myapp runs forever because the deployment never becomes ready.
Fix — Always add --timeout:
- name: Deploy and verify
  run: |
    kubectl set image deployment/myapp myapp=$IMAGE_TAG
    kubectl rollout status deployment/myapp --timeout=3m
  timeout-minutes: 5
If the rollout doesn't complete in 3 minutes, the command exits with a non-zero exit code, failing the step.
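The failure propagates because GitHub runs run: scripts with bash -e by default on Linux, so the first non-zero exit aborts the step. A quick sketch of the mechanism, with false standing in for the failing kubectl command:

```shell
# bash -e aborts the script at the first command that exits non-zero,
# which is how a timed-out `kubectl rollout status` fails the whole step.
bash -e -c '
  false   # stands in for: kubectl rollout status --timeout=3m
  echo "this line never runs"
' || echo "step failed with exit $?"
```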
Also check why the deployment isn't completing:
- name: Deploy
  run: |
    kubectl set image deployment/myapp myapp=$IMAGE_TAG
    if ! kubectl rollout status deployment/myapp --timeout=3m; then
      echo "Deployment failed. Pod status:"
      kubectl get pods -l app=myapp
      echo "Recent events:"
      kubectl describe deployment/myapp | tail -20
      exit 1
    fi
Fix 5: SSH/Wait Loop Hanging
Symptom: A step that SSHs into a server or polls a URL runs forever.
# BAD — no timeout on SSH
ssh user@server "deploy.sh"
# GOOD — add ConnectTimeout and ServerAliveInterval
ssh -o ConnectTimeout=10 \
-o ServerAliveInterval=30 \
-o ServerAliveCountMax=3 \
    user@server "deploy.sh"
For polling loops, always add a maximum iteration count:
# BAD — infinite loop
while ! curl -s http://myapp/health; do sleep 5; done

# GOOD — fail after 2 minutes
ATTEMPTS=0
MAX_ATTEMPTS=24  # 24 * 5s = 120s
until curl -s http://myapp/health | grep -q '"status":"ok"'; do
  ATTEMPTS=$((ATTEMPTS+1))
  if [ $ATTEMPTS -ge $MAX_ATTEMPTS ]; then
    echo "Health check failed after ${MAX_ATTEMPTS} attempts"
    exit 1
  fi
  sleep 5
done
Fix 6: Self-Hosted Runner Issues
Symptom: Jobs time out only on self-hosted runners, not GitHub-hosted.
Self-hosted runners can be stuck, have resource constraints, or have stale processes from previous runs.
# Add cleanup step at the start of every job
jobs:
  build:
    runs-on: self-hosted
    steps:
      - name: Cleanup runner
        run: |
          docker system prune -f --volumes
          rm -rf $GITHUB_WORKSPACE/*
      - uses: actions/checkout@v4
Check runner health:
# On the runner machine
systemctl status actions.runner.*
journalctl -u actions.runner.* -n 100
Restart the runner service if it's stuck:
systemctl restart actions.runner.myorg.myrepo
Fix 7: npm install / pip install Hanging
Cause: Package registry rate limiting or connectivity issue.
# npm — add timeout and registry
- name: Install dependencies
  run: npm ci --prefer-offline
  timeout-minutes: 5
  env:
    NPM_CONFIG_REGISTRY: https://registry.npmjs.org

# pip — add timeout
- name: Install Python dependencies
  run: pip install --timeout 60 -r requirements.txt
  timeout-minutes: 5
For npm ci, use caching to avoid downloading packages on every run:
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
- run: npm ci
Diagnosing a Hanging Job
Add debug output to find where it's hanging:
- name: Debug — show running processes
  if: always()  # Run even if previous steps failed
  run: |
    ps aux
    df -h
    free -h
    docker ps
Or use tmate to SSH into a hanging runner for live debugging:
- name: Setup tmate session on failure
  uses: mxschmitt/action-tmate@v3
  if: failure()
  timeout-minutes: 15
This opens an SSH tunnel so you can interactively debug the runner when a job fails.
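As a lighter-weight alternative to tmate, a background watchdog loop can periodically dump process state during a long step, so the log shows what was running when the hang started. A sketch (assumes a Linux runner; the interval and commands are illustrative):

```shell
# Start a background loop that snapshots the busiest processes every 60s.
( while true; do
    echo "--- watchdog $(date -u +%H:%M:%S) ---"
    ps aux --sort=-%cpu | head -5
    sleep 60
  done ) &
WATCHDOG=$!

sleep 1   # stands in for the long-running build/test command
kill "$WATCHDOG"
echo "watchdog stopped"
```

Because the snapshots are interleaved with the step's normal output, the last watchdog entry before the timeout usually names the hung process.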
Summary Checklist
jobs:
  build:
    timeout-minutes: 30  # Always set job timeout
    steps:
      - run: docker build ...
        timeout-minutes: 15  # Timeout each slow step
        env:
          DOCKER_BUILDKIT: 1  # Better Docker output
      - run: npm test -- --forceExit  # Force test runner to exit
        timeout-minutes: 10
      - run: |
          kubectl rollout status ... --timeout=3m  # K8s timeout
        timeout-minutes: 5
      - name: Debug on failure
        if: failure()  # Dump state when things go wrong
        run: kubectl describe pods
Setting explicit timeouts at both job and step level is the single highest-ROI change — it turns 6-hour mystery hangs into fast, obvious failures.