How to partition a very large table with limited db space?
Problem
I have a very big table. I want to partition it, but I cannot.
Database size: 1TB, Free space 200GB
Table:
- Size: 165 columns (row length 4216 KB, no LOBs), 500 million rows, 600 GB of data.
- Possible partitioning: one day per partition
- Number of rows per day/partition: 2 million
In order to partition the table, I need to create a clustered index, but creating the partition requires free space equal to the size of the table, and I do not have an extra 600 GB.
Are there any options for partitioning this table?
EDIT 1:
I have tried copying data into separate table.
However when I try to
DELETE (or INSERT) one day of data into another table, I get an error that the transaction log is full and my transaction is being rolled back. My transaction log is approximately 20 GB and I cannot make it any bigger.
Solution
You'll need to create a new table with the same schema, but as a partitioned object. Optionally you can compress the table to save even more space. As you are only putting on average one row per page, I'm not sure how much space savings you will see; I'd recommend putting a few thousand rows into the new table, then compressing, to see if the space savings are worth the CPU overhead.
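As a minimal sketch of what that new partitioned object could look like, assuming a daily RANGE RIGHT partition function on the date column (the names YourPF, YourPS, NewTable, CIX_NewTable, and YourDateColumn are placeholders, not from the original post, and the boundary dates are illustrative):

```sql
-- Daily partition function: one boundary value per day.
-- RANGE RIGHT puts each boundary date into the partition to its right.
CREATE PARTITION FUNCTION YourPF (datetime)
AS RANGE RIGHT FOR VALUES ('2017-01-01', '2017-01-02', '2017-01-03');

-- Map every partition to the same filegroup for simplicity.
CREATE PARTITION SCHEME YourPS
AS PARTITION YourPF ALL TO ([PRIMARY]);

-- New table with the same schema as the original.
CREATE TABLE NewTable
(
    YourDateColumn datetime NOT NULL
    -- ... remaining columns from the original table ...
);

-- Clustering on the partition scheme is what makes the table partitioned;
-- DATA_COMPRESSION = PAGE applies the optional compression discussed above.
CREATE CLUSTERED INDEX CIX_NewTable
ON NewTable (YourDateColumn)
WITH (DATA_COMPRESSION = PAGE)
ON YourPS (YourDateColumn);
```

Because the table starts out empty, creating this clustered index needs almost no free space, which is the whole point of the approach.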
As for how to move this much data without eating up all your drive space and without bloating your transaction log, that will need to be done in a loop moving small amounts of data per run. You'll want to do some data analysis to see how large a window you can process per batch; given the data volumes here (about 2 million rows per day), the loop below moves rows in batches of 10,000, one day at a time.
DECLARE @processFrom datetime;

-- Start from the oldest day in the table
SELECT @processFrom = min(YourDateColumn)
FROM YourTable;

DECLARE @Rows TABLE (...);

WHILE EXISTS (SELECT * FROM YourTable)
BEGIN
    -- Move at most 10,000 rows per iteration to keep each transaction,
    -- and therefore the log usage, small
    DELETE TOP (10000) FROM YourTable
    OUTPUT DELETED.* INTO @Rows
    WHERE YourDateColumn = @processFrom;

    INSERT INTO NewTable
    (...)
    SELECT ...
    FROM @Rows;

    DELETE FROM @Rows;

    -- @@ROWCOUNT = 0 means the current day is exhausted; move on to the next
    IF @@ROWCOUNT = 0
        SET @processFrom = dateadd(dd, 1, @processFrom);
END

When everything is done and you've verified that all the data is in the new table, drop the old table and rename the new table so that it has the old table's name; that way nothing breaks. You'll want to script out the permissions on the old table (if there are any) so that you can apply them to the new table.
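The final swap can be sketched as follows (table and role names are placeholders; run this in a maintenance window only after verifying the data):

```sql
-- Verify row counts match before dropping anything
SELECT (SELECT COUNT_BIG(*) FROM YourTable) AS old_rows,
       (SELECT COUNT_BIG(*) FROM NewTable)  AS new_rows;

-- Swap the tables: drop the old one and give the new one its name
BEGIN TRANSACTION;
DROP TABLE YourTable;
EXEC sp_rename 'NewTable', 'YourTable';
COMMIT;

-- Re-apply the permissions you scripted out earlier, for example:
GRANT SELECT ON dbo.YourTable TO SomeRole;
```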
If there are tables with foreign keys pointing to this table, you'll need to drop those foreign keys before the rename will work.
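A sketch of handling those referencing foreign keys; the child table, constraint, and column names are hypothetical placeholders:

```sql
-- Find the foreign keys that reference the table
SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.YourTable');

-- Drop each one before the swap...
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_YourTable;

-- ...and recreate it afterwards, once the new table carries the old name
ALTER TABLE dbo.ChildTable
ADD CONSTRAINT FK_ChildTable_YourTable
FOREIGN KEY (ParentId) REFERENCES dbo.YourTable (ParentId);
```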
Context
StackExchange Database Administrators Q#44219, answer score: 5
Revisions (0)
No revisions yet.