Filegroup shrinking with partitioned table

  • Here's my scenario:

    I created 11 filegroups, each with a single file. Each file was initially created at a size of about 10 GB.

    I created a partition function and scheme that use the year to determine which filegroup the data goes into.

    I then loaded 3 large tables (1 billion rows each) which had data from 2000 - 2010. The data spread across all the filegroups/files perfectly.
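
    Roughly what that setup looks like in T-SQL (a sketch with made-up object and filegroup names, not my actual script):

    -- Hypothetical names; RANGE RIGHT with one boundary per year gives
    -- 11 partitions for 2000 - 2010, one per filegroup.
    CREATE PARTITION FUNCTION pf_ByYear (datetime)
    AS RANGE RIGHT FOR VALUES
        ('20010101', '20020101', '20030101', '20040101', '20050101',
         '20060101', '20070101', '20080101', '20090101', '20100101');

    CREATE PARTITION SCHEME ps_ByYear
    AS PARTITION pf_ByYear
    TO (FG2000, FG2001, FG2002, FG2003, FG2004, FG2005,
        FG2006, FG2007, FG2008, FG2009, FG2010);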

    Now I have decided to delete 2 of the tables, and I won't need to load them again in the future.

    How can I:

    1) shrink the files down to as small a size as possible

    2) make sure that the indexes aren't badly fragmented because of the shrinking? (I have read that DBCC SHRINKFILE will heavily fragment the indexes.)
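
    For reference, something like this is what I'd use to see how much free space each file actually has before picking a shrink target (sys.database_files and FILEPROPERTY are standard; the column aliases are just made up):

    -- Allocated size vs. space actually used, per file.
    -- size and SpaceUsed are reported in 8 KB pages, so divide by 128 for MB.
    SELECT  name,
            size / 128                                     AS allocated_mb,
            FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
            (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM    sys.database_files;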


    David

  • Paul Randal has written about this quite a bit.

    The recommendation is to move all your tables and indexes to a new filegroup and then drop the old one (or shrink it if it's the PRIMARY filegroup, since that one can't be dropped). Alternatively, you can run DBCC SHRINKFILE and then rebuild all the clustered and nonclustered indexes to address the fragmentation.
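
    A minimal sketch of the second option, with hypothetical file and table names (the real logical file names come from sys.database_files):

    -- Reclaim the space freed by the dropped tables, one file at a time.
    -- The second argument is the target size in MB for that file.
    DBCC SHRINKFILE (FG2000_Data, 2048);

    -- The shrink moves pages around and fragments the indexes, so rebuild
    -- everything on the remaining partitioned table afterwards.
    ALTER INDEX ALL ON dbo.RemainingBigTable REBUILD;

    Keep in mind the rebuilds need free working space in each filegroup, so don't shrink the files all the way down to the used size before rebuilding.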

    Thank you. I had read that article and was hoping I wouldn't have to create another 11 filegroups/files, move the data, and then remove the old ones. I guess I didn't realize I could just shrink the files and then rebuild the indexes... which seems easier in my situation.


    David
