The idea for this article, and the basic structure of the code, come from Mr. Pandekar's work, so a big thanks goes to him.
In a data warehouse it is sometimes necessary to rebuild not only indexes but also heaps, because a typical analytical query performs a full table scan.
Moreover, SQL Server 2008 does not allow a partitioned index to be rebuilt online.
It is normal in a DWH system that the administrator needs to rebuild a partition without taking the table offline.
Ten years ago I was working with Oracle, and I would have accomplished this task with something like:
Duplicate target_table as target_table_new and build indexes on target_table_new identical to those on
target_table; then rename target_table to target_table_drop and rename target_table_new to target_table (or use synonyms).
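The rename-based swap described above can be sketched in T-SQL. This is only an illustration (the table names are placeholders), and it assumes target_table_new has already been loaded and indexed identically to target_table:

```sql
-- Hypothetical sketch of the Oracle-style rename swap, translated to T-SQL.
-- Assumes dbo.target_table_new is already populated and indexed.
BEGIN TRANSACTION;

EXEC sp_rename 'dbo.target_table',     'target_table_drop';
EXEC sp_rename 'dbo.target_table_new', 'target_table';

COMMIT TRANSACTION;

-- Once the new table is verified, the old copy can be dropped:
-- DROP TABLE dbo.target_table_drop;
```

The drawback of this approach is that it duplicates the whole table, not just one partition, which is what the SWITCH-based procedure below avoids.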
- How rebuild partition works: it uses two temporary tables.
Copies all partition rows into a temporary table b and builds indexes on this table,
then switches the target partition into temp table a,
then switches the data from temp table b into the target partition,
then drops the temporary tables a and b.
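The steps above can be sketched in static T-SQL. This is an illustrative outline only (schema, table, partition function, and partition number are placeholders; the real procedure generates these statements dynamically):

```sql
-- Illustrative sketch of the SWITCH-based rebuild for partition 3 of dbo.Fact.
-- SWITCH requires that the stage tables have the same structure as dbo.Fact,
-- live on the same filegroup as the partition, and (for step 3) that the
-- source table carries a CHECK constraint matching the partition boundaries.

-- 1) Copy the partition rows into stage table b and rebuild indexes on it.
SELECT * INTO dbo.Fact_b
FROM dbo.Fact
WHERE $PARTITION.pf_Fact(PartitionKey) = 3;
-- ... CREATE INDEX statements on dbo.Fact_b mirroring those on dbo.Fact ...

-- 2) Switch the fragmented target partition out into empty temp table a.
ALTER TABLE dbo.Fact SWITCH PARTITION 3 TO dbo.Fact_a;

-- 3) Switch the freshly built data from b into the now-empty partition.
ALTER TABLE dbo.Fact_b SWITCH TO dbo.Fact PARTITION 3;

-- 4) Drop the temporary tables.
DROP TABLE dbo.Fact_a;
DROP TABLE dbo.Fact_b;
```

Because both SWITCH operations are metadata-only, the table stays online except for a brief schema lock; the expensive work (the copy and index build in step 1) happens on the stage table.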
- Command for execution: exec REBUILD_PARTITION 'SchemaName','TableName',PartitionNumber
- Limitations:
1) It is not expected to work with compressed tables or indexes,
2) Will not work with tables containing foreign keys or check constraints,
3) Will not work with a clustered primary key defined as DESC,
4) Works only with partition ranges on datetime columns,
5) The operation is not ATOMIC: if the switch command fails, the target partition will be left empty.
The data are not lost, however; they must be reinserted manually (see lines 463-473).
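Since the rows survive in the stage tables after a failed switch, the manual recovery could look something like the following sketch (table names are the same placeholders as above, not the procedure's actual generated names):

```sql
-- Hypothetical recovery after a failed SWITCH: the target partition is empty,
-- but the rows are still in the stage table. Reinsert them manually.
INSERT INTO dbo.Fact WITH (TABLOCK)
SELECT *
FROM dbo.Fact_b;  -- or dbo.Fact_a, whichever stage table still holds the rows
```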
- Successful test results for:
1) Nonclustered PK + indexes
2) Clustered PK + indexes
- Possible improvements:
1) Support for compression,
2) The INSERT into stage table b (SELECT * from the target partition) is slow and should be replaced with a bulk insert from OPENROWSET in some way,
3) Support for check and foreign key constraints: in general these are not used in a data warehouse, because very few processes modify the tables' data, so constraints to enforce data integrity are not necessary.
I encourage everybody to use my procedure and test it. If many people are happy with it, I may consider putting it into an automatic job to defragment the database. Today the procedure works, but I think it is wise to run it under human supervision.
Federico Iori, Bratislava (Slovakia), firstname.lastname@example.org