How to Set Up Terraform Remote State with S3 and DynamoDB (Step by Step)
Storing Terraform state locally breaks team workflows and risks data loss. This guide shows you exactly how to configure remote state with S3 and DynamoDB locking — the production standard setup.
Every Terraform tutorial starts the same way. You run terraform init and terraform apply, a terraform.tfstate file appears in your current directory, and everything works fine. Then you join a team, or you run Terraform from a CI/CD pipeline, and everything breaks.
Two engineers run terraform apply at the same time. One overwrites the other's state. Or the state file lives on someone's laptop and that person leaves the team. Or the file gets accidentally deleted. These aren't theoretical risks — they're the actual ways teams lose production infrastructure state.
The S3 + DynamoDB remote state setup is the production standard for a reason. It solves all three problems simultaneously: state is stored centrally in S3 (versioned, encrypted, durable), and DynamoDB provides a locking mechanism that prevents two operations from running simultaneously. This guide walks through every step to set it up correctly.
Why Local State Is a Problem
Before jumping into the setup, it's worth understanding exactly what can go wrong with local state — because the "why" shapes the "how."
Problem 1: No team collaboration. Terraform state is the source of truth about what infrastructure exists. If it lives on your laptop, your teammates' terraform plan sees none of the existing infrastructure and proposes to create everything from scratch. Every team member needs access to the same state file.
Problem 2: No concurrent access control. If two people run terraform apply at the same moment against the same infrastructure, both processes read the same state, make changes, and then both try to write their updated state. The second write silently overwrites the first. Resources get orphaned, deleted, or created twice.
Problem 3: No disaster recovery. A local state file has whatever durability properties your laptop has. Which is to say, none. One rm -rf, one stolen laptop, one accidental git clean, and your state is gone. Recreating it from scratch means importing every resource manually — an extremely painful process.
Remote state in S3 solves problems 1 and 3 immediately. DynamoDB locking solves problem 2.
Step 1: Create the S3 Bucket
The S3 bucket stores your state files. You want three properties: versioning (so you can recover from accidental state corruption), server-side encryption (state files contain sensitive data: resource IDs, IPs, sometimes credentials), and all public access blocked.
You can create this bucket in Terraform itself — but there's a chicken-and-egg problem: you can't use remote state to manage the bucket that stores your remote state. Create it with the AWS CLI or a separate bootstrap Terraform configuration that uses local state.
# Create the bucket (for any region other than us-east-1, also pass
# --create-bucket-configuration LocationConstraint=<region>)
aws s3api create-bucket \
--bucket your-company-terraform-state \
--region us-east-1
# Enable versioning
aws s3api put-bucket-versioning \
--bucket your-company-terraform-state \
--versioning-configuration Status=Enabled
# Enable server-side encryption (AES256)
aws s3api put-bucket-encryption \
--bucket your-company-terraform-state \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}]
}'
# Block all public access
aws s3api put-public-access-block \
--bucket your-company-terraform-state \
--public-access-block-configuration \
"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"Choose a bucket name that's globally unique and descriptive. A pattern like {company}-{project}-terraform-state-{account-id} works well.
Step 2: Create the DynamoDB Table for Locking
DynamoDB state locking works by creating a lock record (an item in the table) when Terraform starts an operation. If another Terraform process tries to start and finds an existing lock, it waits or errors out rather than proceeding.
The table needs exactly one attribute: LockID as a string, as the hash key. Nothing else.
aws dynamodb create-table \
--table-name terraform-state-locks \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--region us-east-1

PAY_PER_REQUEST billing mode is almost always the right choice for a lock table — the traffic is extremely low (just lock/unlock operations during Terraform runs), and you don't want to provision capacity for something that runs a few times a day at most.
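Under the hood, the lock is just a conditional write. You can reproduce the mechanism with the AWS CLI to see it for yourself — this is purely illustrative (the setup doesn't require it), and the demo-lock LockID and Info payload here are made up:

```shell
# Acquire: succeeds only if no item with this LockID exists yet
aws dynamodb put-item \
--table-name terraform-state-locks \
--item '{"LockID": {"S": "demo-lock"}, "Info": {"S": "who/when/operation"}}' \
--condition-expression "attribute_not_exists(LockID)"

# Running the same put-item again fails with ConditionalCheckFailedException —
# which is exactly how a second Terraform process gets blocked.

# Release: delete the item
aws dynamodb delete-item \
--table-name terraform-state-locks \
--key '{"LockID": {"S": "demo-lock"}}'
```

Terraform's real LockID is the bucket name plus the state key, which is why the manual unlock command later in this guide uses that format.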
Step 3: Configure the Terraform Backend
Now add the backend configuration to your Terraform code. This goes in a backend.tf file (or at the top of your main.tf) in the Terraform configuration you want to use remote state:
# backend.tf
terraform {
backend "s3" {
bucket = "your-company-terraform-state"
key = "prod/vpc/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-locks"
}
}

The key is the path within the S3 bucket where this state file will be stored. Use a meaningful path that reflects your project structure. A common pattern:
{environment}/{component}/terraform.tfstate
# Examples:
prod/vpc/terraform.tfstate
prod/eks-cluster/terraform.tfstate
staging/rds/terraform.tfstate
This lets you manage multiple components with separate state files (which you should — monolithic state files become a performance and blast-radius problem at scale) while keeping everything in one bucket.
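If you script around this convention (in CI, for example), a tiny shell helper can assemble the key and guard against typos. The variable names here are our own, not anything Terraform requires:

```shell
# Assemble a state key following {environment}/{component}/terraform.tfstate
environment="prod"
component="vpc"
key="${environment}/${component}/terraform.tfstate"

# Reject keys that don't match the convention
case "$key" in
  */*/terraform.tfstate) echo "OK: $key" ;;
  *) echo "Unexpected key layout: $key" >&2; exit 1 ;;
esac
```
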
Important: Backend configuration cannot use Terraform variables or expressions. The bucket, key, region, and dynamodb_table values must be literal strings. This is a common gotcha that trips up first-time remote state setups.
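If you genuinely need different values per environment, Terraform's partial backend configuration is the supported workaround: leave the values out of the backend block and supply them at init time. The values below mirror this guide's examples:

```shell
# backend.tf contains only:  terraform { backend "s3" {} }
# Supply the settings when initializing (e.g. per environment in CI):
terraform init \
  -backend-config="bucket=your-company-terraform-state" \
  -backend-config="key=prod/vpc/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=terraform-state-locks"
```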
Step 4: Initialize and Migrate State
Run terraform init. If you already have a local state file, Terraform will detect it and ask if you want to migrate state to the remote backend.
terraform init

You'll see output like:
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new backend?
Enter a value: yes
Successfully configured the backend "s3"!
Type yes to migrate. Terraform copies your existing local state to S3, holding a DynamoDB lock while it does.
After this, you can safely delete your local terraform.tfstate file — the remote copy is the source of truth.
Step 5: Verify State Is in S3
Confirm the state was written correctly:
# List the state file in S3
aws s3 ls s3://your-company-terraform-state/prod/vpc/
# Download and inspect it (careful — may contain sensitive data)
aws s3 cp s3://your-company-terraform-state/prod/vpc/terraform.tfstate /tmp/state-check.tfstate
cat /tmp/state-check.tfstate | head -20
# Check that versioning captured the initial write
aws s3api list-object-versions \
--bucket your-company-terraform-state \
--prefix prod/vpc/terraform.tfstate

You should see the state file listed and at least one version recorded. From this point, every terraform apply creates a new version in S3 — your full state history is preserved.
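You can also verify from Terraform's side: with the backend active, the state commands read from S3 transparently. The resource address below is a hypothetical example, not something this guide created:

```shell
# Reads the remote state from S3 and lists every tracked resource
terraform state list

# Inspect a single resource without downloading the raw file
terraform state show aws_vpc.main
```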
Common Errors and How to Fix Them
Error: NoSuchBucket
Error: Failed to get existing workspaces: S3 bucket does not exist.
The bucket name in your backend config doesn't match the actual bucket name, or the bucket is in a different region. Verify with aws s3 ls | grep terraform.
Error: Lock acquisition timeout
Error: Error acquiring the state lock
Lock Info:
ID: 12345678-...
Operation: OperationTypeApply
Who: user@machine
Created: 2026-03-11 ...
Another Terraform process is running (or was running and crashed without releasing the lock). If you're certain no other process is active, manually delete the lock:
aws dynamodb delete-item \
--table-name terraform-state-locks \
--key '{"LockID": {"S": "your-company-terraform-state/prod/vpc/terraform.tfstate"}}'Or use Terraform's built-in force-unlock:
terraform force-unlock <LOCK_ID>Error: AccessDenied
The IAM role or user running Terraform doesn't have S3 or DynamoDB permissions. The minimum required IAM permissions are:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
"Resource": [
"arn:aws:s3:::your-company-terraform-state",
"arn:aws:s3:::your-company-terraform-state/*"
]
},
{
"Effect": "Allow",
"Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
"Resource": "arn:aws:dynamodb:us-east-1:*:table/terraform-state-locks"
}
]
}

Bonus: Using Workspaces with Remote State

Terraform workspaces let you use a single configuration to manage multiple environments (staging, prod) with isolated state files. When you use workspaces with the S3 backend, Terraform automatically namespaces the state key:
# Create and switch to a staging workspace
terraform workspace new staging
terraform workspace select staging
# Terraform now writes state to:
# s3://your-bucket/env:/staging/prod/vpc/terraform.tfstate

The env:/{workspace_name}/ prefix is added automatically. This gives you environment isolation without duplicating your backend configuration.
A word of caution: workspaces share the same backend configuration. If your staging and prod environments have meaningfully different configurations (different AWS accounts, different regions), separate backend configurations in separate directories is cleaner than workspaces.
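One common directory layout for the separate-directories approach looks like this (illustrative; the names are ours):

```
environments/
├── staging/
│   ├── backend.tf   # key = "staging/vpc/terraform.tfstate"
│   └── main.tf
└── prod/
    ├── backend.tf   # key = "prod/vpc/terraform.tfstate"
    └── main.tf
```

Each directory gets its own backend block and its own state, so a mistake in staging can never touch prod's state file.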
Setting Up Terraform on Cloud Infrastructure
If you're building Terraform-managed infrastructure, DigitalOcean is worth considering as an alternative to AWS for teams who want simpler, more predictable pricing on their application infrastructure while keeping AWS for the parts where AWS shines. DigitalOcean's Spaces object storage is S3-compatible, so the same backend configuration works with minor endpoint changes (you'd still need DynamoDB, or another mechanism, for locking).
For structured learning on Terraform at scale — state management, modules, workspaces, CI/CD integration — KodeKloud has a comprehensive Terraform course with hands-on labs that take you from first terraform init through production-grade multi-environment setups. The labs are particularly good because you practice against real cloud infrastructure, not simulations.
Summary
The S3 + DynamoDB remote state setup comes down to two AWS resources, a backend block, and one init:
- S3 bucket with versioning + encryption enabled
- DynamoDB table with a LockID hash key
- backend "s3" block in your Terraform config with the bucket, key path, region, and table name
- terraform init to migrate and activate the backend
Once this is in place, your entire team can run Terraform safely from any machine or CI/CD system, state is version-controlled and encrypted, and concurrent runs are protected by DynamoDB locks. This is the setup every production Terraform deployment should use.