Terraform Plan Shows Unexpected Destroy — How to Fix It
Fix Terraform plan showing unexpected resource destruction. Covers state drift, provider upgrades, import mismatches, lifecycle rules, and safe recovery strategies.
You run terraform plan expecting a minor change, and instead see:
Plan: 0 to add, 0 to change, 14 to destroy.
Your stomach drops. Fourteen resources marked for destruction — including your production database, load balancer, and DNS records. You didn't touch any of those. What happened?
This is one of the scariest moments in infrastructure management, and it happens more often than anyone admits. Let's understand why and fix it safely.
Why Terraform Wants to Destroy Your Resources
There are five common causes, ranked by frequency:
Cause 1: State Drift — Someone Changed Resources Outside Terraform
The most common cause. Someone modified a resource through the AWS console, CLI, or another tool. Now Terraform's state doesn't match reality, and the plan shows a destroy-and-recreate to "fix" the drift.
How to diagnose:
# See what Terraform thinks the current state is
terraform show
# Refresh state to match reality
terraform plan -refresh-only
If the -refresh-only plan shows changes, your state is stale.
How to fix:
# Option 1: Accept the current reality into state
terraform apply -refresh-only
# Option 2: If someone changed a resource and you want to keep those changes
# Update your .tf files to match, then refresh
terraform apply -refresh-only
Cause 2: Provider or Terraform Version Upgrade
You upgraded the AWS provider from 5.x to 5.y, and the new version handles certain resource attributes differently. The provider now sees a "difference" that requires replacement.
How to diagnose:
# Check what changed
terraform plan -detailed-exitcode 2>&1 | grep "forces replacement"
Look for lines like:
# aws_instance.web must be replaced
-/+ resource "aws_instance" "web" {
~ ami = "ami-12345" -> "ami-67890" # forces replacement
How to fix:
Check the provider changelog for breaking changes. Often you need to add lifecycle rules:
resource "aws_instance" "web" {
# ... config ...
lifecycle {
ignore_changes = [ami]
}
}
Or pin the provider version until you're ready to handle the upgrade:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.40.0" # Pin to minor version
}
}
}
Cause 3: Resource Moved or Renamed in Code
You renamed a resource block:
# Before
resource "aws_s3_bucket" "data" { ... }
# After (renamed)
resource "aws_s3_bucket" "app_data" { ... }
Terraform sees this as: destroy aws_s3_bucket.data and create aws_s3_bucket.app_data.
How to fix:
Use the moved block (Terraform 1.1+):
moved {
from = aws_s3_bucket.data
to = aws_s3_bucket.app_data
}
Or move in state manually:
terraform state mv aws_s3_bucket.data aws_s3_bucket.app_data
Cause 4: Module Refactoring
You moved resources into or out of a module:
# Before: top-level resource
resource "aws_db_instance" "main" { ... }
# After: inside a module
module "database" {
source = "./modules/database"
}
Terraform sees this as destroying the old resource and creating a new one.
How to fix:
terraform state mv aws_db_instance.main module.database.aws_db_instance.main
Cause 5: count/for_each Index Shift
You removed an item from the middle of a count-based resource:
# Before
variable "subnets" {
default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}
resource "aws_subnet" "main" {
count = length(var.subnets)
cidr_block = var.subnets[count.index]
}
# After (removed middle subnet)
variable "subnets" {
default = ["10.0.1.0/24", "10.0.3.0/24"] # removed 10.0.2.0/24
}
Terraform now wants to:
- Keep aws_subnet.main[0] (10.0.1.0/24)
- Destroy aws_subnet.main[1] (10.0.2.0/24) and recreate it as 10.0.3.0/24
- Destroy aws_subnet.main[2] (10.0.3.0/24) — no longer exists in the list
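The index shift is easy to see outside Terraform: removing a middle item from a plain list renumbers everything after it. A quick shell illustration of why index 1 suddenly "changes" (the arrays mirror the subnet lists above):

```shell
# count-style resources are identified purely by position.
before=("10.0.1.0/24" "10.0.2.0/24" "10.0.3.0/24")  # what state remembers
after=("10.0.1.0/24" "10.0.3.0/24")                  # what config now says

# State still has the old middle subnet at index 1, but config has the
# last one there -- Terraform sees a mismatch and plans a replace.
STATE_1="${before[1]}"    # 10.0.2.0/24
CONFIG_1="${after[1]}"    # 10.0.3.0/24
echo "index 1: state=$STATE_1 config=$CONFIG_1"
```

Index 2 has no counterpart in the new list at all, so Terraform destroys it outright.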
How to fix — migrate to for_each:
resource "aws_subnet" "main" {
for_each = toset(var.subnets)
cidr_block = each.value
}
With for_each, each resource is keyed by value, not index. Removing an item only destroys that specific resource.
Migration:
# Move from count index to for_each key
terraform state mv 'aws_subnet.main[0]' 'aws_subnet.main["10.0.1.0/24"]'
terraform state mv 'aws_subnet.main[1]' 'aws_subnet.main["10.0.2.0/24"]'
terraform state mv 'aws_subnet.main[2]' 'aws_subnet.main["10.0.3.0/24"]'
Emergency Procedures: What to Do Right Now
If you see unexpected destroys and need to act:
Step 1: Don't Apply
Obvious, but worth stating. Never auto-approve plans you haven't reviewed. If you have CI/CD auto-applying, stop it.
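If your pipeline does auto-apply, a cheap circuit-breaker is a pre-apply check that refuses any plan containing deletes. A rough sketch; the JSON here is a synthetic stand-in for `terraform show -json` output of a saved plan:

```shell
# Synthetic stand-in for `terraform show -json plan.tfplan` output.
PLAN_JSON=$(mktemp)
cat > "$PLAN_JSON" <<'EOF'
{"resource_changes": [
  {"address": "aws_db_instance.main", "change": {"actions": ["delete"]}}
]}
EOF

# Count changes that include a "delete" action; block apply if any exist.
DELETES=$(grep -c '"delete"' "$PLAN_JSON")
if [ "$DELETES" -gt 0 ]; then
  APPLY_ALLOWED="no"   # in CI: exit 1 here so the apply step never runs
else
  APPLY_ALLOWED="yes"
fi
echo "deletes=$DELETES apply_allowed=$APPLY_ALLOWED"
```

A real check would use jq against the actions array rather than a grep, but the shape of the guard is the same: inspect the plan, then decide whether apply may run.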
Step 2: Save the Plan for Analysis
terraform plan -out=scary-plan.tfplan
terraform show -json scary-plan.tfplan > scary-plan.json
# See what's being destroyed
jq '.resource_changes[] | select(.change.actions | contains(["delete"])) | .address' scary-plan.json
Step 3: Lock the State
If multiple people might run Terraform:
# If using S3 backend with DynamoDB
# The lock is automatic during plan/apply
# But you can also manually prevent others from running
Step 4: Targeted Plan
Check specific resources:
terraform plan -target=aws_db_instance.main -target=aws_lb.main
This helps isolate which changes are intentional vs unexpected.
Step 5: State Surgery (If Needed)
# Backup state FIRST
terraform state pull > state-backup-$(date +%Y%m%d).json
# Remove a resource from state (Terraform will "forget" it)
terraform state rm aws_instance.problematic
# Re-import it
terraform import aws_instance.problematic i-0abc123def456
Prevention: Stop This from Happening Again
1. Use prevent_destroy for Critical Resources
resource "aws_db_instance" "production" {
# ... config ...
lifecycle {
prevent_destroy = true
}
}
Now any plan or apply that would destroy this resource errors out instead.
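To keep prevent_destroy honest across a large repo, a quick audit can flag files that never set it. A rough sketch; the two throwaway .tf files are created inline so the example is self-contained, and in practice you would run the grep across your real module tree:

```shell
# Scratch directory with one protected and one unprotected file,
# standing in for a real Terraform repo.
AUDIT_DIR=$(mktemp -d)
cat > "$AUDIT_DIR/db.tf" <<'EOF'
resource "aws_db_instance" "production" {
  lifecycle {
    prevent_destroy = true
  }
}
EOF
cat > "$AUDIT_DIR/web.tf" <<'EOF'
resource "aws_instance" "web" {}
EOF

# grep -L lists files with NO match -- candidates for a lifecycle review.
UNPROTECTED=$(grep -L "prevent_destroy" "$AUDIT_DIR"/*.tf)
echo "review these files: $UNPROTECTED"
```

This is a blunt instrument (it checks files, not individual resources), but it catches the common case of a critical module that has no lifecycle block at all.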
2. Lock Provider Versions
terraform {
required_version = ">= 1.7.0, < 2.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.40.0"
}
}
}
3. Use for_each Instead of count
Avoid index-shift problems entirely:
# Bad: count with dynamic lists
resource "aws_subnet" "main" {
count = length(var.subnets)
...
}
# Good: for_each with stable keys
resource "aws_subnet" "main" {
for_each = { for s in var.subnets : s.name => s }
...
}
4. Enable Drift Detection
Run terraform plan on a schedule to catch drift early:
# GitHub Actions - weekly drift check
name: Terraform Drift Detection
on:
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9 AM
jobs:
  drift-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: |
          set +e  # -detailed-exitcode exits 2 on changes; don't fail the step yet
          terraform plan -detailed-exitcode -refresh-only
          EXIT_CODE=$?
          set -e
          if [ "$EXIT_CODE" -eq 2 ]; then
            echo "DRIFT DETECTED"
            # Send Slack notification
          fi
5. Plan Review in CI/CD
Never auto-apply in production. Use Atlantis or Spacelift for PR-based review:
PR opened → terraform plan runs → Plan output posted as PR comment →
Human reviews → Approves → terraform apply
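A minimal sketch of the "plan output posted as PR comment" step: capture the plan summary and hand it to a bot. The plan text here is a stub, and the gh pr comment call is illustrative (it assumes the GitHub CLI and a PR context), so it is shown commented out:

```shell
# Stub plan output; in a real pipeline this would come from
# `terraform plan -no-color`.
plan_output="aws_instance.web: Refreshing state...
Plan: 0 to add, 0 to change, 14 to destroy."

# Pull out just the one-line summary for the PR comment.
SUMMARY=$(printf '%s\n' "$plan_output" | grep '^Plan:')
echo "$SUMMARY"

# Illustrative posting step:
# gh pr comment "$PR_NUMBER" --body "$SUMMARY"
```

Tools like Atlantis do exactly this (plus locking and apply gating) out of the box; the point is that the human sees the summary, including any destroy count, before anything is applied.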
Quick Reference Cheat Sheet
# See what's being destroyed (from a saved plan)
terraform show -json scary-plan.tfplan | jq -r '.resource_changes[] | select(.change.actions[] == "delete") | .address'
# Refresh state without changes
terraform apply -refresh-only
# Backup state
terraform state pull > backup.json
# Move resource in state
terraform state mv old_address new_address
# Remove from state (forget, don't destroy)
terraform state rm resource_address
# Import existing resource
terraform import resource_address cloud_resource_id
# Target specific resources
terraform plan -target=resource_address
If you're building Terraform skills, KodeKloud's Terraform courses include real lab scenarios where you can practice state management safely without risking production resources.
The scariest Terraform output is destroy. The safest habit is: always read the plan, always back up the state, never auto-approve in production.