gotcha · bash · Critical
Terraform state locking prevents concurrent apply race conditions
Tags: terraform, state lock, DynamoDB, S3 backend, race condition, concurrent apply, aws
Problem
Two terraform apply runs executing simultaneously against the same state file will corrupt it: one apply reads stale state, both write conflicting changes, and the infrastructure drifts from the declared configuration.
Solution
Use a backend that supports state locking. For S3 + DynamoDB:
terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

The DynamoDB table must have LockID as the partition key (String type).
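One way to create such a table is with Terraform itself; a minimal sketch (the table name must match dynamodb_table in the backend block, and PAY_PER_REQUEST billing is an assumption — provisioned capacity also works):

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"    # must match the backend's dynamodb_table
  billing_mode = "PAY_PER_REQUEST"    # assumption; low lock traffic suits on-demand
  hash_key     = "LockID"             # Terraform requires exactly this key name

  attribute {
    name = "LockID"
    type = "S" # String type, as Terraform requires
  }
}
```

Note that this table is usually created in a separate bootstrap configuration, since the backend that would store this resource's state depends on the table already existing.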
In CI, use terraform plan + apply in sequence with the -lock-timeout flag:
- run: terraform plan -lock-timeout=10m -out=tfplan
- run: terraform apply -lock-timeout=10m tfplan

Why
DynamoDB provides conditional writes with atomic lock acquisition. If a lock exists, the second apply waits or fails rather than proceeding with stale state.
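The same acquire-or-fail behavior can be sketched locally with mkdir, which is atomic in the same all-or-nothing way as DynamoDB's conditional write; the lock directory name and the "apply" labels here are purely illustrative, not anything Terraform actually does:

```shell
#!/bin/sh
# mkdir either creates the directory and succeeds, or fails because it
# already exists -- an atomic acquire-or-fail, like a conditional put.
LOCK_DIR="${TMPDIR:-/tmp}/tfstate-lock-demo.$$"

acquire() { mkdir "$LOCK_DIR" 2>/dev/null; }
release() { rmdir "$LOCK_DIR" 2>/dev/null; }

acquire && echo "apply-1: lock acquired"
acquire || echo "apply-2: lock held, refusing to run"
release
acquire && echo "apply-2: lock acquired after release"
release
```

The second acquire is refused until the first holder releases — the same reason a second terraform apply waits or fails instead of proceeding with stale state.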
Gotchas
- If a CI job is cancelled mid-apply, the lock may remain in DynamoDB; use terraform force-unlock with the lock ID to recover
- Terraform Cloud and HCP Terraform provide state locking automatically without DynamoDB
- Never use local state in CI; the state file is lost when the runner terminates