Terraform Error Acquiring the State Lock: Causes and Fix
Terraform state lock errors can block your entire team. Learn why they happen, how to safely unlock state, and how to prevent lock conflicts for good.
You run terraform apply and instead of seeing a plan, you get this:
╷
│ Error: Error acquiring the state lock
│
│ Error message: ConditionalCheckFailedException: The conditional request failed
│ Lock Info:
│ ID: 8f3d2a1b-4c5e-6f7a-8b9c-0d1e2f3a4b5c
│ Path: my-project/terraform.tfstate
│ Operation: OperationTypeApply
│ Who: ci-runner@github-actions
│ Version: 1.6.0
│ Created: 2026-03-13 08:45:12.123456789 +0000 UTC
│ Info:
╵
Your whole team is blocked. No one can run terraform plan or apply. This guide explains what state locking is, why it gets stuck, and how to fix it safely without corrupting your infrastructure state.
Why Terraform Locks State
Terraform uses state files to track what infrastructure it manages. When two people (or two CI jobs) run Terraform simultaneously against the same state file, they can overwrite each other's changes and corrupt the state — leaving your infrastructure in an undefined, unrecoverable mess.
State locking prevents this. Before making any changes, Terraform acquires an exclusive lock on the state backend. Other Terraform processes see the lock and wait (or fail immediately with the error above).
When using S3 + DynamoDB as a backend:
- The S3 bucket stores the actual .tfstate file
- The DynamoDB table handles the lock — a record is written with the lock ID when a run starts and deleted when it ends
The problem: Terraform doesn't always clean up the lock. If a process is killed, crashes, loses its network connection, or times out mid-run, the lock record stays in DynamoDB indefinitely; there is no automatic cleanup. Every future terraform command against that state fails until you remove the lock manually.
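A stale lock is easy to see in the table itself. The sketch below assumes the table is named terraform-locks (as in this guide's examples) and uses jq to pull the lock ID out of a scan result; the sample JSON mirrors the Lock Info fields shown in the error above:

```shell
# List current lock entries (table name is an assumption from this guide):
#   aws dynamodb scan --table-name terraform-locks --output json

# Sample of what such a scan returns while a lock is held; the Info
# attribute holds the same JSON Terraform prints in the error message
SCAN_OUTPUT='{"Items":[{"LockID":{"S":"my-project/terraform.tfstate"},"Info":{"S":"{\"ID\":\"8f3d2a1b-4c5e-6f7a-8b9c-0d1e2f3a4b5c\",\"Who\":\"ci-runner@github-actions\"}"}}]}'

# Extract the lock ID you'd pass to `terraform force-unlock`
LOCK_ID=$(echo "$SCAN_OUTPUT" | jq -r '.Items[0].Info.S | fromjson | .ID')
echo "$LOCK_ID"
```

The Info attribute is a JSON string embedded inside the DynamoDB JSON, hence the `fromjson` step.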
Step 1: Confirm the Lock Is Stale
Before unlocking, make sure the lock is actually stale — not from a real ongoing operation.
Check who holds the lock from the error message:
Who: ci-runner@github-actions
Created: 2026-03-13 08:45:12
Now verify:
- Is that CI job still running? Check your GitHub Actions, GitLab CI, or Jenkins for active runs. If the job is finished (succeeded, failed, or cancelled), the lock is definitely stale.
- Is that team member actively running Terraform? Ask the person named in the Who field. If they aren't, the lock is stale.
- How old is the lock? Locks older than 30-60 minutes for a normal apply are almost certainly stale unless you're running a very large Terraform plan.
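The age check above can be scripted. This is a hypothetical helper (GNU date assumed, as on most Linux CI runners) that compares the Created timestamp from the Lock Info against a staleness threshold:

```shell
# Return success if a lock's Created timestamp (UTC) is older than a
# threshold in minutes. Hypothetical helper; assumes GNU date (Linux).
is_lock_stale() {
  created="$1"; threshold_min="$2"
  created_epoch=$(date -u -d "$created" +%s)
  now_epoch=$(date -u +%s)
  [ $(( (now_epoch - created_epoch) / 60 )) -ge "$threshold_min" ]
}

# A lock created years ago is clearly stale
if is_lock_stale "2020-01-01 00:00:00" 60; then
  echo "stale"
else
  echo "possibly active"
fi
```

Treat this as a hint, not proof — always cross-check against your CI system before unlocking.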
If any active operation IS running, do not unlock — wait for it to finish.
Step 2: Force-Unlock the State
Once you've confirmed the lock is stale, use Terraform's built-in unlock command. Copy the lock ID from the error message:
terraform force-unlock 8f3d2a1b-4c5e-6f7a-8b9c-0d1e2f3a4b5c
Terraform will ask for confirmation:
Do you really want to force-unlock?
Terraform will remove the lock on the remote state.
This will allow local Terraform commands to modify this state, even though it
may still be used. Only 'yes' will be accepted to confirm.
Enter a value: yes
Terraform state has been successfully unlocked!
Now run your terraform plan or apply as normal.
Step 3: If force-unlock Fails
Sometimes terraform force-unlock itself fails — usually because your AWS credentials don't have DynamoDB write permission, or the lock entry is malformed.
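If the cause is missing permissions, the identity running Terraform typically lacks write access to the lock table. The S3 backend's DynamoDB locking needs roughly these actions (a sketch; the account ID, region, and table name in the ARN are placeholders for your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/terraform-locks"
    }
  ]
}
```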
In this case, delete the DynamoDB lock entry directly:
# Find your DynamoDB table name (it's in your backend config)
# Usually something like "terraform-locks" or "my-project-tfstate-lock"
# Delete the lock record directly
aws dynamodb delete-item \
--table-name terraform-locks \
--key '{"LockID": {"S": "my-project/terraform.tfstate"}}' \
--region us-east-1
The LockID key is usually <bucket-name>/<path-to-statefile> — the bucket name followed by the path inside it where the .tfstate lives.
Verify the lock is gone:
aws dynamodb get-item \
--table-name terraform-locks \
--key '{"LockID": {"S": "my-project/terraform.tfstate"}}' \
--region us-east-1
If it returns no item (empty output), the lock is cleared.
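If you want to script this verification, a small hypothetical helper can interpret the get-item output — the AWS CLI prints nothing at all when the item doesn't exist:

```shell
# Hypothetical helper: interpret raw `aws dynamodb get-item` output.
# Empty output means no lock record exists for that LockID.
check_lock_cleared() {
  if [ -z "$1" ]; then
    echo "lock cleared"
  else
    echo "lock still present"
  fi
}

check_lock_cleared ""   # what get-item prints once the lock is deleted
check_lock_cleared '{"Item":{"LockID":{"S":"my-project/terraform.tfstate"}}}'
```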
Root Causes: Why Locks Get Stuck
1. CI job cancelled mid-run
The most common cause. Someone clicks "Cancel" on a GitHub Actions or GitLab CI pipeline while terraform apply is running. The job is killed but Terraform doesn't get a chance to release the lock.
2. Network interruption
The machine running Terraform lost its connection to AWS mid-apply. The process died but the DynamoDB lock entry stayed.
3. Terraform process killed with SIGKILL
Running kill -9 on a Terraform process (or an OOM killer on a resource-starved machine) prevents Terraform from running its cleanup code.
4. Terraform crashed
Rare but it happens — especially with provider bugs or very large state files. Terraform panics, crashes, and leaves a lock behind.
Preventing State Lock Issues
Add timeout to CI pipelines:
Instead of cancelling jobs, let Terraform finish or time out naturally. Configure a pipeline-level timeout:
# GitHub Actions
jobs:
  terraform:
    timeout-minutes: 30  # Force-kill after 30 min, Terraform gets cleanup time
    steps:
      - run: terraform apply -auto-approve
Use -lock-timeout in Terraform:
Instead of immediately failing when a lock is held, wait for a period:
terraform apply -lock-timeout=10m
This is useful in CI where two jobs occasionally overlap — the second job retries for up to 10 minutes before giving up.
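To apply the timeout to every run in a pipeline without editing each invocation, Terraform also reads per-command TF_CLI_ARGS_<command> environment variables (the 10m value here is just this guide's example):

```shell
# Append -lock-timeout to all plan and apply runs via environment variables
export TF_CLI_ARGS_plan="-lock-timeout=10m"
export TF_CLI_ARGS_apply="-lock-timeout=10m"
echo "$TF_CLI_ARGS_apply"
```

Set these once at the top of your CI job and every subsequent Terraform command picks them up.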
Terraform Cloud / Atlantis for team workflows:
If your team frequently hits lock conflicts, you need a proper workflow manager. Terraform Cloud and Atlantis serialize Terraform operations so only one runs at a time, preventing races entirely.
# Atlantis config example (atlantis.yaml)
version: 3
projects:
  - name: my-project
    dir: .
    workspace: production
    autoplan:
      when_modified: ["*.tf"]
Add a cleanup step in CI to unlock:
If your CI frequently gets cancelled, add a cleanup step:
# GitHub Actions — cleanup on cancellation
jobs:
  terraform:
    steps:
      - name: Terraform Apply
        run: terraform apply -auto-approve
      - name: Cleanup stale lock on cancellation
        if: cancelled()
        run: |
          # Terraform doesn't expose the lock ID after a cancelled run,
          # so remove the DynamoDB entry for this state file directly
          aws dynamodb delete-item \
            --table-name terraform-locks \
            --key '{"LockID": {"S": "my-project/terraform.tfstate"}}' \
            --region us-east-1
Backend Config Reference
If you're setting up S3 + DynamoDB backend for the first time, here's the minimal working config:
# Create the DynamoDB table first (one-time setup)
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# Then use it in your backend config
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
Quick Reference
# Unlock using Terraform
terraform force-unlock <LOCK_ID>
# Unlock directly via AWS CLI
aws dynamodb delete-item \
--table-name terraform-locks \
--key '{"LockID": {"S": "path/to/terraform.tfstate"}}'
# List current lock entries
aws dynamodb scan --table-name terraform-locks
# Apply with lock timeout (waits instead of failing)
terraform apply -lock-timeout=5m
# Disable locking (DANGEROUS — only for debugging)
terraform apply -lock=false
Never use -lock=false in production or with teammates — it bypasses all safety and can cause state corruption.
Going Deeper on Terraform
Terraform state management, workspaces, and remote backends are intermediate-to-advanced topics. If you want a solid foundation — including hands-on labs where you practice with real AWS accounts — check out the KodeKloud Terraform course:
👉 Master Terraform at KodeKloud
State lock errors are annoying but completely preventable. Once you understand why they happen, they stop being a mystery and start being a 30-second fix.
Related Articles
AWS ALB Showing Unhealthy Targets — How to Fix It
Fix AWS Application Load Balancer unhealthy targets. Covers health check misconfigurations, security group issues, target group problems, and EKS-specific ALB controller debugging.
AWS EKS Pods Stuck in Pending State: Causes and Fixes
Pods stuck in Pending on EKS are caused by a handful of known issues — insufficient node capacity, taint mismatches, PVC problems, and more. Here's how to diagnose and fix each one.
AWS IAM Permission Denied Errors — How to Fix Every Variant (2026)
Getting 'Access Denied' or 'is not authorized to perform' errors in AWS? Here's how to diagnose and fix every IAM permission issue — EC2, EKS, Lambda, S3, and CLI.