March 12, 2009 at 3:24 pm
I am hoping someone can point me in the right direction. I have 100 million rows which I plan to store in a table with 13 partitions, each on its own filegroup. Each partition holds one week of data and the table will slide: each week I will add a new partition and remove the oldest one, since I only need to keep the current 13 weeks.
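For context, here is a rough sketch of how a setup like this is usually declared: a RANGE RIGHT partition function on the date column with 12 boundary values (giving 13 partitions), mapped to one filegroup per partition. All the names, boundary dates, and filegroup names below are illustrative assumptions, not taken from my actual schema:

```sql
-- Hypothetical weekly partition function: 12 boundary dates => 13 partitions.
-- RANGE RIGHT means each boundary date belongs to the partition on its right.
CREATE PARTITION FUNCTION pf_WeeklySlide (datetime)
AS RANGE RIGHT FOR VALUES
    ('2009-01-05', '2009-01-12', '2009-01-19', '2009-01-26',
     '2009-02-02', '2009-02-09', '2009-02-16', '2009-02-23',
     '2009-03-02', '2009-03-09', '2009-03-16', '2009-03-23');

-- One filegroup per partition; the 14th is marked NEXT USED,
-- ready to receive the next SPLIT.
CREATE PARTITION SCHEME ps_WeeklySlide
AS PARTITION pf_WeeklySlide
TO (FG01, FG02, FG03, FG04, FG05, FG06, FG07,
    FG08, FG09, FG10, FG11, FG12, FG13, FG14);
```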
My question is about how to handle the indexes. I have read a number of articles, but I am still not sure of the best approach.
I was planning on:
1) Drop my indexes.
2) Switch the oldest partition out to a table with the same structure.
3) Alter the partition function to merge and split the ranges.
4) Recreate the indexes on the table.
5) Truncate the staging table holding the old data that was switched out in step 2.
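The switch/merge/split part of those steps might look something like the following. This is only a sketch, assuming the illustrative names pf_WeeklySlide, ps_WeeklySlide, dbo.FactTable, and an empty staging table dbo.FactTable_Old created with the same structure on the oldest partition's filegroup:

```sql
-- Step 2: switch the oldest partition (partition 1) out to the staging table.
-- This is a metadata-only operation when structures and indexes match.
ALTER TABLE dbo.FactTable
SWITCH PARTITION 1 TO dbo.FactTable_Old;

-- Step 3a: merge away the oldest boundary, collapsing the now-empty partition.
ALTER PARTITION FUNCTION pf_WeeklySlide()
MERGE RANGE ('2009-01-05');

-- Step 3b: mark the filegroup the next split should use (reusing the one
-- freed by the merge), then split in the new week's boundary.
ALTER PARTITION SCHEME ps_WeeklySlide NEXT USED FG01;
ALTER PARTITION FUNCTION pf_WeeklySlide()
SPLIT RANGE ('2009-03-30');

-- Step 5: discard the switched-out rows.
TRUNCATE TABLE dbo.FactTable_Old;
```

One thing I am wondering about is whether steps 1 and 4 are needed at all: from what I have read, if every index is partition-aligned (created on the partition scheme rather than a plain filegroup), SWITCH does not require dropping and recreating indexes, though I would appreciate confirmation.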
This means I am removing close to 7 million rows each week. Is this the best way to do it? I am nervous about re-indexing a 100-million-row table every week; my guess is this could take an hour, which seems to defeat the purpose of partitioning.
Is there a better way to do this?
I have attached an example to help illustrate.
Any help and/or ideas would be greatly appreciated.
March 16, 2009 at 8:59 am
Does anyone have any experience with this?