Enabling page compression on 1.5TB table
Problem
I need to come up with the best approach possible to enable page compression on a table 1500 GB in size, with 363,957,740 rows. The database itself is 1.71 TB and holds archive data.
If I understand correctly, enabling page compression needs additional disk space (possibly the same amount as the table, just to be on the safe side) so that it can create a copy of the table with page compression enabled and then release the space. The database is in the FULL recovery model, so the operation will be heavily logged. I have spoken to my capacity planning resource, and he has agreed to provide the additional temporary space required for this maintenance and to take it back once the activity is complete. With that said, do you think the best approach would be the following (a T-SQL sketch of the key steps follows the list):
- Take a full backup
- Restore the backup onto new temporary drives
- Change the recovery model to SIMPLE
- Enable page compression
- Take a full backup after compression and restore the original database
- Change the recovery model back to FULL
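As a minimal T-SQL sketch of steps 3, 4, and 6, assuming a placeholder database ArchiveDB and table dbo.ArchiveTbl (neither name is from the original post), with an optional estimation call to help size the temporary space:

```sql
USE ArchiveDB;  -- placeholder database name
GO

-- Optional: estimate how much space PAGE compression would actually save
-- before requesting the temporary disk space.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'ArchiveTbl',   -- placeholder table name
     @index_id         = NULL,            -- all indexes
     @partition_number = NULL,            -- all partitions
     @data_compression = N'PAGE';

-- Step 3: switch to SIMPLE recovery to keep log growth manageable during the rebuild.
ALTER DATABASE ArchiveDB SET RECOVERY SIMPLE;

-- Step 4: rebuild the clustered and nonclustered indexes with PAGE compression.
ALTER INDEX ALL ON dbo.ArchiveTbl
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Step 6: back to FULL recovery; a fresh full backup restarts the log chain.
ALTER DATABASE ArchiveDB SET RECOVERY FULL;
```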
Also, instead of simply enabling page compression: after Step #3, truncate the largest table, enable page compression, and then do a SELECT * INTO ReplicaDB.dbo.ReplicaTbl? Does this enable page compression on existing indexes?
I do not have a test environment to test the above steps. Alternatively, if a better approach is available, please let me know.
The goal is to minimize the future disk space required at the current growth rate. We are an ERP software company and we have licensed Enterprise Edition. This table is used only for archiving; once certain checks are performed, all of the data resides in this table. I have 2 indexes (1 clustered, 1 nonclustered). None of the columns are VARCHAR(MAX); they are all NVARCHAR, int, or date types.
Solution
Your plan sounds good, except for this part:
Also, instead of simply enabling page compression: after Step #3, truncate the largest table, enable page compression, and then do a SELECT * INTO ReplicaDB.dbo.ReplicaTbl?
If you truncate a table and enable page compression, you must then do an INSERT and not a SELECT INTO, and at that point you should use the TABLOCK hint so that you get page compression and not only row compression. Besides this, rebuilding with compression is much faster than inserting into a compressed table; we see a difference of 3-5 times on large tables.
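For illustration, a sketch of the two alternatives being contrasted, using placeholder names (dbo.ArchiveTbl for the existing table, ReplicaDB.dbo.ReplicaTbl for the copy, assumed to be created in advance with page compression):

```sql
-- Alternative 1 (usually faster): rebuild the existing table and its
-- indexes in place with PAGE compression.
ALTER INDEX ALL ON dbo.ArchiveTbl
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Alternative 2: load into a pre-created, page-compressed copy.
-- SELECT ... INTO would create a brand-new, uncompressed table, so the
-- target must already exist with compression enabled, and TABLOCK is
-- needed to get page compression during the load (see the quote below).
INSERT INTO ReplicaDB.dbo.ReplicaTbl WITH (TABLOCK)
SELECT *
FROM dbo.ArchiveTbl;
```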
Have a look here: The Data Loading Performance Guide, section Data Compression and Bulk Load:
When data is bulk loaded into a compressed table or partition, both page-level and row-level compression are generally done at bulk load time. However, you should be aware of an exception: when bulk loading into a page compressed heap, you must use the TABLOCK hint to achieve page compression. If the TABLOCK hint is not used, only row level compression is done on the heap. Using page compression on a bulk target will generally slow down bulk load speeds, especially if the I/O system is able to supply enough speed to saturate the CPUs.
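As for the question about existing indexes: compression is set per index (and per partition), so nonclustered indexes have to be rebuilt with compression as well. After the rebuild, the result can be verified with a query along these lines (dbo.ArchiveTbl is again a placeholder name):

```sql
-- Show the compression setting of every index and partition on the table.
SELECT i.name                 AS index_name,
       i.type_desc            AS index_type,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions AS p
JOIN sys.indexes    AS i
    ON  i.object_id = p.object_id
    AND i.index_id  = p.index_id
WHERE p.object_id = OBJECT_ID(N'dbo.ArchiveTbl');
```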
Context
StackExchange Database Administrators Q#182275, answer score: 5