How to maintain a global index on a large table undergoing regular partition purges?
Problem
I've got a giant table, haystack, weighing in at nearly 3 TB, partitioned by month (date) and subpartitioned by client. I need to create a global index that spans all clients to facilitate querying by column needle. This is simple enough.

The issue arises at the end of the month, when the oldest table partition is dropped. I can't afford to rebuild a global index on a 3 TB table every month. Only the four most recent month partitions of haystack are in the database at any given time, so roughly 25% of the table is removed when dropping a monthly partition.

Attempted Solutions
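For concreteness, the layout described above might look roughly like this. This is a hypothetical sketch: the column names created_date and client_id, the hash subpartitioning, the subpartition count, and the partition names are all assumptions, since the question only says "partitioned by month (date) and subpartitioned by client".

```sql
-- Hypothetical sketch of the haystack layout; column names, partition
-- names, and the subpartition scheme are assumed, not from the question.
CREATE TABLE haystack (
    needle       VARCHAR2(64) NOT NULL,
    created_date DATE         NOT NULL,
    client_id    NUMBER       NOT NULL,
    payload      CLOB
)
PARTITION BY RANGE (created_date)
SUBPARTITION BY HASH (client_id) SUBPARTITIONS 16 (
    PARTITION p_month_1 VALUES LESS THAN (DATE '2015-07-01'),
    PARTITION p_month_2 VALUES LESS THAN (DATE '2015-08-01'),
    PARTITION p_month_3 VALUES LESS THAN (DATE '2015-09-01'),
    PARTITION p_month_4 VALUES LESS THAN (DATE '2015-10-01')
);
```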
I've tried creating an index on column needle, globally partitioned by hash on needle. But this means rebuilding the entire global index when dropping an old month partition from the table. No good.

I've also tried creating an index on columns (date, needle), globally partitioned by range on date, with ranges that match the table's monthly partitions. But again, I'm still forced to update my entire global index when dropping that old month partition from the table, even though my table and index are partitioned identically (excepting the table's subpartitions).

Of course, I can't simply create a locally partitioned index. I would need the partitions to be local but without the subpartitions. From what I can tell, that's not possible; it's an all-or-nothing choice with the local partitioning option on indexes.
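The two attempts above might be sketched like this (hypothetical DDL; the created_date column, partition counts, and partition names are assumptions). Note that Oracle requires the highest partition of a global range-partitioned index to be bounded by MAXVALUE:

```sql
-- Attempt 1: global hash-partitioned index on needle
-- (partition count is assumed)
CREATE INDEX haystack_needle_idx ON haystack (needle)
    GLOBAL PARTITION BY HASH (needle) PARTITIONS 8;

-- Attempt 2: global range-partitioned index mirroring the table's
-- monthly ranges, plus the mandatory MAXVALUE partition
CREATE INDEX haystack_date_needle_idx ON haystack (created_date, needle)
    GLOBAL PARTITION BY RANGE (created_date) (
        PARTITION p_month_1 VALUES LESS THAN (DATE '2015-07-01'),
        PARTITION p_month_2 VALUES LESS THAN (DATE '2015-08-01'),
        PARTITION p_month_3 VALUES LESS THAN (DATE '2015-09-01'),
        PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );
```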
Dropping Partitions
For completeness, here are a few options I've tried for dropping old partitions at the end of the month.

```sql
alter table HAYSTACK drop partition P_MONTH update global indexes;
```

This takes far too long, as it attempts to maintain the global indexes during the drop.

```sql
alter table HAYSTACK drop partition P_MONTH;
```

Without the UPDATE GLOBAL INDEXES clause, my global index is left with all of its partitions marked unusable. (Understandably so.)

```sql
alter index HAYSTACK_IDX drop partition P_MONTH;
```

Dropping my index partition invalidates the index.

Solution
In version <= 11.2? No.*
But in 12c, there is a new feature called Asynchronous Global Index Maintenance.
The DROP PARTITION operation completes immediately, leaving the global index in a usable state. The trick is that the index entries are not maintained; they are orphaned, and these orphaned entries are cleaned up later. The cleanup happens automatically in the maintenance window, or it can be started manually.

More details about this: https://richardfoote.wordpress.com/category/asynchronous-global-index-maintenance/
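A sketch of what that looks like in 12c, reusing the table, index, and partition names from the question. The cleanup calls use Oracle's documented DBMS_PART.CLEANUP_GIDX procedure and the COALESCE CLEANUP clause:

```sql
-- 12c: returns quickly; the dropped partition's index entries are
-- marked orphaned rather than removed synchronously
alter table HAYSTACK drop partition P_MONTH update global indexes;

-- Check whether any index on the table still carries orphaned entries
SELECT index_name, orphaned_entries
  FROM user_indexes
 WHERE table_name = 'HAYSTACK';

-- Trigger the cleanup manually instead of waiting for the
-- maintenance window (SQL*Plus syntax)
EXEC DBMS_PART.CLEANUP_GIDX(USER, 'HAYSTACK');

-- Or clean up a single index
ALTER INDEX HAYSTACK_IDX COALESCE CLEANUP;
```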
*: There is, but I don't think it is applicable in your case. Deleting (and NOT truncating) all rows from the partition before dropping it leaves the global index in a usable state. But this is an enormous amount of extra work for 25% of a 3 TB table.
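For what it's worth, the footnote's pre-12c workaround could be sketched like this (table and partition names from the question); as noted, it is prohibitively expensive at this scale:

```sql
-- Pre-12c workaround: deleting (not truncating) the rows first keeps
-- the global index usable, at the cost of conventional DML against
-- roughly 25% of a 3 TB table
DELETE FROM haystack PARTITION (P_MONTH);
COMMIT;

alter table HAYSTACK drop partition P_MONTH;
```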
Context
StackExchange Database Administrators Q#102614, answer score: 4