Backup strategies: 3-2-1 rule and automated verification
Tags: backup, 3-2-1 rule, disaster recovery, pg_dump, restoration testing, offsite, S3
Problem
Backups exist but have never been tested. When data loss occurs, the restore process fails or restores corrupt data.
Solution
Follow the 3-2-1 rule and automate restoration testing:
```bash
#!/bin/bash
# Automated Postgres backup with verification
set -euo pipefail  # abort on any error, including pg_dump failing mid-pipe

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="/backups/db_$DATE.sql.gz"

# Create backup (quote variables so URLs/paths with special characters survive)
pg_dump "$DATABASE_URL" | gzip > "$BACKUP_FILE"

# Verify backup is readable and non-empty
if [ ! -s "$BACKUP_FILE" ]; then
    echo "BACKUP FAILED: empty file" >&2
    exit 1
fi

if ! GZ_TEST=$(gzip -t "$BACKUP_FILE" 2>&1); then
    echo "BACKUP FAILED: corrupt gzip: $GZ_TEST" >&2
    exit 1
fi

# Upload to offsite storage (S3)
aws s3 cp "$BACKUP_FILE" "s3://$BACKUP_BUCKET/db/"

# Delete local backups older than 7 days
find /backups -name 'db_*.sql.gz' -mtime +7 -delete

echo "Backup complete: $BACKUP_FILE"
```

3-2-1 rule:
- 3 copies of data
- 2 different storage media/services
- 1 offsite copy
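
The offsite leg of the rule is worth verifying too, not just uploading. A minimal sketch, assuming the same `$BACKUP_BUCKET` variable and `db/` key prefix as the backup script above, that compares the local file size against the object S3 actually stored:

```shell
#!/bin/bash
# Sketch: confirm the offsite (S3) copy matches the local backup size.
# The bucket layout (db/<filename>) mirrors the upload path used above;
# adapt it if your prefix differs.
set -u

local_size() {      # portable byte count for a file
    wc -c < "$1" | tr -d ' '
}

remote_size() {     # usage: remote_size <bucket> <backup-file>
    aws s3api head-object \
        --bucket "$1" --key "db/$(basename "$2")" \
        --query ContentLength --output text
}

verify_offsite() {  # usage: verify_offsite <bucket> <backup-file>
    [ "$(local_size "$2")" -eq "$(remote_size "$1" "$2")" ]
}

# Example: verify_offsite "$BACKUP_BUCKET" "$BACKUP_FILE" || exit 1
```

A size mismatch usually means a truncated upload; for stronger guarantees, compare checksums rather than sizes.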
Why
An untested backup is not a backup. Backups that fail silently (empty files, corrupt archives) are worse than no backup because they create false confidence.
Gotchas
- Test restores quarterly — create a staging environment, restore from backup, verify data integrity
- Encrypt backups before uploading to cloud storage — even private S3 buckets can be misconfigured
- pg_dump produces a consistent snapshot, but it can take hours on very large databases and offers no point-in-time recovery — consider continuous WAL archiving with pg_basebackup for large datasets
- Retention policy is part of the backup strategy — know how far back you can restore
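
The quarterly restore test from the first gotcha can itself be automated. A hedged sketch, assuming a throwaway Postgres reachable via a `$RESTORE_URL` connection string and backups stored under the same layout as the script above; the row-count sanity check is a placeholder you should replace with checks against your own schema:

```shell
#!/bin/bash
# Sketch: automated restore test into a scratch database.
set -u

latest_backup() {   # newest db_*.sql.gz under the given directory
    ls -t "$1"/db_*.sql.gz | head -n1
}

restore_test() {    # usage: restore_test <backup.sql.gz> <restore_url>
    gunzip -c "$1" | psql "$2" || return 1
    # Sanity check: the restored DB must contain user tables
    local tables
    tables=$(psql "$2" -tAc \
        "SELECT count(*) FROM pg_tables WHERE schemaname = 'public'")
    [ "$tables" -gt 0 ]
}

# Example: restore_test "$(latest_backup /backups)" "$RESTORE_URL" || alert
```

Schedule this against a disposable database, and treat a failed restore test with the same urgency as a failed backup.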