HiveBrain v1.2.0

Backup tuning for large databases

Submitted by: @import:stackexchange-dba

Problem

I am tuning SQL Server backups using compression, writing to multiple files, and adjusting the BufferCount and MaxTransferSize options. I have a SQL Server instance with a 4 TB database and several smaller databases ranging from a few MB to 150 GB.

My question is: can I start tuning with the 150 GB database and then apply those settings to the 4 TB backup with minimal changes, or does the 4 TB database require its own "tuning session" from scratch? Since it usually takes several backup attempts to dial in the settings, I'm trying to get a jump-start on tuning the 4 TB database if possible.

Solution

Backing up the entire database to a single file will be much slower. Consider the following points:

  • Stripe your backups across different spindles on a SAN.

  • Use backup compression. It costs more CPU, but it results in fewer overall I/O operations during both backup and restore.

  • Use trace flags 3605 and 3213 to find out the default BUFFERCOUNT value used in your backups, then tune from there.

  • Move max server memory away from the default: increasing BUFFERCOUNT and MAXTRANSFERSIZE requires additional memory.

  • For your 4 TB database, assuming recent and historical data live in separate filegroups, mark the historical filegroups READ_ONLY and back them up once. Then perform regular backups of only the read/write filegroups.

  • Use BACKUP ... WITH CHECKSUM. It places a checksum on every page in the backup file, which helps detect corrupt pages in the backup when doing restores.

  • It is best to back up locally on the server (provided you have enough disk space) and then transfer the backup file to a remote backup location -- or, if you are backing up directly to a network share, use multiple NICs and/or separate logical subnets in the network.

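A minimal sketch combining several of the points above — a striped, compressed backup with explicit buffer tuning. The database name, file paths, and parameter values are illustrative starting points, not recommendations for any particular hardware:

```sql
-- Striped, compressed backup with explicit buffer tuning.
-- [BigDB], the drive paths, and BUFFERCOUNT = 50 are assumptions for
-- illustration; measure before settling on values for your instance.
BACKUP DATABASE [BigDB]
TO  DISK = N'E:\Backups\BigDB_1.bak',
    DISK = N'F:\Backups\BigDB_2.bak',
    DISK = N'G:\Backups\BigDB_3.bak',
    DISK = N'H:\Backups\BigDB_4.bak'
WITH COMPRESSION,
     CHECKSUM,
     BUFFERCOUNT = 50,          -- check the default first via trace flags 3605/3213
     MAXTRANSFERSIZE = 4194304, -- 4 MB, the maximum allowed
     STATS = 5;                 -- progress messages every 5%
```

Each DISK clause should point at a different spindle so the stripes do not contend for the same I/O path.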
Below is from the whitepaper "A Technical Case Study: Fast and Reliable Backup and Restore of Multi-Terabytes Database over the Network":

As a side note, it's more important to test your restore strategy: a backup is only good if it can be restored without any issues. Enabling Instant File Initialization will considerably cut down the restore time.
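As a cheap first check of restorability, the backup can be read back and its page checksums revalidated without actually restoring it (paths are illustrative; every stripe in the media set must be listed):

```sql
-- Read the backup and revalidate the page checksums written by
-- BACKUP ... WITH CHECKSUM, without restoring the database.
RESTORE VERIFYONLY
FROM DISK = N'E:\Backups\BigDB_1.bak',
     DISK = N'F:\Backups\BigDB_2.bak'
WITH CHECKSUM;
```

Note that VERIFYONLY is not a substitute for periodically restoring the backup to a test server; it verifies the file, not the full restore path.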

Be careful when striping your backups: a stripe may become corrupted, or go missing (deleted) through machine or human error. :-)
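One way to notice a missing stripe before a restore is attempted is to query the backup history that SQL Server records in msdb ([BigDB] is an assumed database name):

```sql
-- List every stripe (media family member) of the most recent backups,
-- so a deleted or misplaced stripe file can be spotted early.
SELECT TOP (10)
       bs.database_name,
       bs.backup_finish_date,
       bmf.family_sequence_number,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'BigDB'
ORDER BY bs.backup_finish_date DESC, bmf.family_sequence_number;
```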

References:

  • Consider efficient backup strategies
  • Backup More Than 1GB per Second Using SQL2008 Backup Compression
  • How to increase SQL Database Full Backup speed using compression and Solid State Disks

Context

StackExchange Database Administrators Q#82501, answer score: 3
