Terraform Multi-Environment Setup with Workspaces — Complete Guide (2026)
Stop duplicating Terraform code for dev, staging, and prod. Use Terraform workspaces to manage multiple environments from one codebase. Step-by-step guide with real AWS examples.
Every real project has at least three environments: dev, staging, and production. The wrong way is to copy-paste your Terraform code three times and maintain them separately. The right way is one codebase, multiple environments.
Terraform workspaces let you do exactly this.
The Problem: Managing Multiple Environments
Without a multi-environment strategy, teams end up with:
terraform/
├── dev/
│ ├── main.tf # copy of prod
│ ├── variables.tf # copy of prod
│ └── terraform.tfstate
├── staging/
│ ├── main.tf # copy of prod with slight changes
│ ├── variables.tf # copy of prod
│ └── terraform.tfstate
└── prod/
├── main.tf # the "real" one
├── variables.tf
└── terraform.tfstate
Problems:
- A bug fix in main.tf must be applied in 3 places
- Environments drift apart over time
- Easy to miss applying a security patch to staging
- 3x the code to review, 3x the bugs
Two Approaches: Workspaces vs Directories
Before diving in, know there are two main strategies:
Terraform Workspaces: Single code directory, separate state per workspace. Switch environments with terraform workspace select. Good for similar environments (same infrastructure, different sizes/counts).
Directory-based (with modules): Separate directories per environment, shared modules. Each environment is explicit. Better for environments with significant differences.
This guide covers workspaces. For very different environments (prod has RDS Multi-AZ + CloudFront, dev has SQLite), the directory approach is better. For "same infrastructure, different scale," workspaces work well.
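For a sense of the alternative, a directory-based layout typically looks like this (a sketch — module and file names are illustrative):

```text
terraform/
├── modules/
│   └── app/              # shared module: VPC, EKS, RDS, etc.
│       ├── main.tf
│       └── variables.tf
└── envs/
    ├── dev/
    │   └── main.tf       # calls ../../modules/app with dev-sized inputs
    ├── staging/
    │   └── main.tf
    └── prod/
        └── main.tf
```

Each environment directory gets its own backend config and its own explicit inputs, at the cost of some repetition.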
What Terraform Workspaces Do
Every Terraform configuration has a workspace. The default workspace is called default.
When you create a new workspace, Terraform:
- Creates a separate state file for that workspace
- Sets the terraform.workspace variable to the workspace name
- Lets your code branch based on the workspace name
# See current workspace
terraform workspace show
# default
# Create and switch to dev workspace
terraform workspace new dev
# Create staging and prod
terraform workspace new staging
terraform workspace new prod
# List all workspaces
terraform workspace list
# default
#   dev
#   staging
# * prod ← current (terraform workspace new also switches to the new workspace)
Project Structure
terraform/
├── main.tf
├── variables.tf
├── outputs.tf
├── locals.tf ← environment-specific config lives here
└── terraform.tfvars ← shared defaults
No subdirectories. One set of .tf files. Workspace determines which environment you're touching.
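The shared variables referenced throughout this guide would live in variables.tf; a sketch (defaults are illustrative, not prescriptive):

```hcl
# variables.tf — shared across all workspaces
variable "project_name" {
  type    = string
  default = "myapp"
}

variable "region" {
  type    = string
  default = "ap-south-1"
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "db_name" {
  type = string
}

variable "db_username" {
  type = string
}

variable "db_password" {
  type      = string
  sensitive = true # keep the password out of plan output
}
```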
locals.tf — The Heart of the Pattern
This file defines environment-specific values using terraform.workspace:
locals {
env = terraform.workspace # "dev", "staging", or "prod"
# Environment-specific config map
config = {
dev = {
instance_type = "t3.micro"
min_size = 1
max_size = 2
desired_size = 1
db_instance_class = "db.t3.micro"
db_multi_az = false
enable_nat = false # save cost in dev
domain = "dev.myapp.com"
}
staging = {
instance_type = "t3.small"
min_size = 1
max_size = 3
desired_size = 2
db_instance_class = "db.t3.small"
db_multi_az = false
enable_nat = true
domain = "staging.myapp.com"
}
prod = {
instance_type = "m5.large"
min_size = 2
max_size = 10
desired_size = 3
db_instance_class = "db.r5.large"
db_multi_az = true
enable_nat = true
domain = "myapp.com"
}
}
# Shortcut to current environment config
current = local.config[local.env]
}
Now reference these anywhere in your Terraform code:
# Instead of hardcoding t3.micro, use:
instance_type = local.current.instance_type
main.tf — EKS Example with Workspace Config
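A note on the lookup pattern: local.config[local.env] throws an opaque key error when run from a workspace with no config entry — including the default workspace. One way to fail fast with a readable message is a lifecycle precondition (a sketch; the resource name is illustrative, and terraform_data requires Terraform ≥ 1.4):

```hcl
# Guard: abort plan/apply with a clear error if the current workspace
# has no entry in local.config (e.g. someone ran from "default").
resource "terraform_data" "workspace_guard" {
  lifecycle {
    precondition {
      condition     = contains(keys(local.config), terraform.workspace)
      error_message = "No config entry for workspace '${terraform.workspace}'. Use dev, staging, or prod."
    }
  }
}
```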
terraform {
required_version = ">= 1.5"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = "myapp-terraform-state"
key = "infra/terraform.tfstate"
region = "ap-south-1"
dynamodb_table = "terraform-state-lock"
encrypt = true
# Workspaces automatically append workspace name to the key:
# infra/env:/terraform.tfstate
# e.g., infra/dev/terraform.tfstate
workspace_key_prefix = "infra"
}
}
provider "aws" {
region = var.region
default_tags {
tags = {
Environment = local.env
Project = var.project_name
ManagedBy = "terraform"
}
}
}
VPC with Environment-Aware Config
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = "${var.project_name}-${local.env}-vpc"
cidr = var.vpc_cidr
azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
# Only enable NAT in staging and prod (cost saving in dev)
enable_nat_gateway = local.current.enable_nat
single_nat_gateway = local.env != "prod" # prod gets one NAT per AZ
enable_dns_hostnames = true
}
EKS Node Group with Environment Sizing
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "${var.project_name}-${local.env}"
cluster_version = "1.29"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_endpoint_public_access = true
eks_managed_node_groups = {
main = {
# Size comes from workspace config
instance_types = [local.current.instance_type]
min_size = local.current.min_size
max_size = local.current.max_size
desired_size = local.current.desired_size
}
}
}
RDS with Multi-AZ Only in Prod
resource "aws_db_instance" "main" {
identifier = "${var.project_name}-${local.env}-db"
engine = "postgres"
engine_version = "16.1"
instance_class = local.current.db_instance_class
allocated_storage = local.env == "prod" ? 100 : 20
db_name = var.db_name
username = var.db_username
password = var.db_password
# Multi-AZ only in production
multi_az = local.current.db_multi_az
# Skip final snapshot in dev (saves cost/cleanup time)
skip_final_snapshot = local.env != "prod"
# Only prod gets deletion protection
deletion_protection = local.env == "prod"
vpc_security_group_ids = [aws_security_group.rds.id]
db_subnet_group_name = aws_db_subnet_group.main.name
tags = {
Name = "${var.project_name}-${local.env}-db"
}
}
State Backend: Separate State Per Workspace
The S3 backend with workspace_key_prefix automatically creates separate state files:
s3://myapp-terraform-state/
├── infra/dev/terraform.tfstate
├── infra/staging/terraform.tfstate
└── infra/prod/terraform.tfstate
Each workspace has completely isolated state. Destroying dev never touches prod state.
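That isolation only helps if you run commands in the workspace you think you're in. A small guard function (a sketch to wire into your own wrapper scripts; feed it the output of `terraform workspace show`) compares the active workspace against an intended target before destructive commands:

```shell
# Returns "ok" when the active workspace matches the intended target,
# otherwise prints a mismatch message.
confirm_workspace() {
  current="$1"
  target="$2"
  if [ "$current" = "$target" ]; then
    echo "ok"
  else
    echo "mismatch: active workspace is '$current', expected '$target'"
  fi
}
```

Usage: `[ "$(confirm_workspace "$(terraform workspace show)" prod)" = ok ] && terraform destroy`.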
Create the S3 bucket and DynamoDB table first (bootstrap — run once manually or with a separate Terraform config):
# bootstrap/main.tf — run this once before everything else
resource "aws_s3_bucket" "terraform_state" {
bucket = "myapp-terraform-state"
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_dynamodb_table" "terraform_lock" {
name = "terraform-state-lock"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
Daily Workflow
# Work on dev
terraform workspace select dev
terraform plan
terraform apply
# Promote to staging (after dev is stable)
terraform workspace select staging
terraform plan # review what changes
terraform apply
# Promote to prod (after staging validation)
terraform workspace select prod
terraform plan # careful review
terraform apply
CI/CD Integration (GitHub Actions)
name: Terraform Deploy
on:
push:
branches: [main] # triggers prod deploy
pull_request:
branches: [main] # triggers dev plan
jobs:
terraform:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Terraform
uses: hashicorp/setup-terraform@v3
with:
terraform_version: 1.7.0
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ap-south-1
- name: Terraform Init
run: terraform init
- name: Select workspace
run: |
# Use branch name to determine environment
if [ "${{ github.ref }}" == "refs/heads/main" ]; then
terraform workspace select prod || terraform workspace new prod
elif [ "${{ github.ref }}" == "refs/heads/staging" ]; then
terraform workspace select staging || terraform workspace new staging
else
terraform workspace select dev || terraform workspace new dev
fi
- name: Terraform Plan
run: terraform plan -out=tfplan
- name: Terraform Apply
if: github.ref == 'refs/heads/main'
run: terraform apply tfplan
Workspaces vs Terragrunt
For complex setups, Terragrunt is an alternative that also manages multi-environment infrastructure. Terragrunt uses directory-based environments with DRY configuration inheritance.
Use workspaces when: single module, similar environments, small team.
Use Terragrunt when: many modules with dependencies, complex DRY requirements, large mono-repo.
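Whichever tool you pick, the branch-to-workspace mapping from the CI job above is worth factoring into a single function so local scripts and CI agree (a sketch mirroring the if/elif logic in the workflow):

```shell
# Map a git branch name to a Terraform workspace, matching the CI job:
# main -> prod, staging -> staging, everything else -> dev.
branch_to_workspace() {
  case "$1" in
    main)    echo "prod" ;;
    staging) echo "staging" ;;
    *)       echo "dev" ;;
  esac
}
```

Usage: `terraform workspace select "$(branch_to_workspace "$(git rev-parse --abbrev-ref HEAD)")"`.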
Summary
| Concept | What it does |
|---|---|
| terraform workspace new dev | Creates dev workspace + isolated state |
| terraform workspace select prod | Switch to prod — all commands now target prod |
| terraform.workspace | Variable containing current workspace name |
| local.config[local.env] | Pattern to load env-specific values |
| S3 + workspace_key_prefix | Separate state files per workspace automatically |
One codebase. Three environments. Zero duplication.
Related: Terraform Remote State with S3 and DynamoDB | Build AWS Infrastructure with Terraform from Scratch