• Actually, this brought up another issue I've always wanted to solve. I have a store/product mapping table, ~83M rows and always growing as stores are added. Stores can have anywhere from a couple of products to 2.5M each. Of course, the large stores make up the majority of the 83M rows.

    The problem is that whenever I update the product mix, I'm deleting/updating/inserting up to 2.5M rows at a time in the 83M-row table. We threw lots of hardware at it and brought 15 minutes down to 2-3 minutes, but I'm still inserting 2.5M records into an 83M-record table, and even with 36 hard drives it just takes time.
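
    For what it's worth, the approach I've been experimenting with is doing the delete/insert in small batches instead of one big transaction, so no single statement holds locks for minutes. This is just a sketch; the table and column names (dbo.StoreProduct, StoreID) are simplified stand-ins for my real schema.

        -- Rough sketch: delete one store's rows in small batches rather than
        -- a single 2.5M-row statement. Names are placeholders for my schema.
        DECLARE @StoreID int = 42;      -- hypothetical store being refreshed
        DECLARE @BatchSize int = 4000;  -- kept under 5,000 to avoid lock escalation
        DECLARE @RowsDeleted int = 1;

        WHILE @RowsDeleted > 0
        BEGIN
            DELETE TOP (@BatchSize)
            FROM dbo.StoreProduct
            WHERE StoreID = @StoreID;

            SET @RowsDeleted = @@ROWCOUNT;
        END;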

    I wanted to use partitioned tables and functions, but those are only in SQL Server Enterprise Edition. Still, in my research on partitioned tables, I needed to come up with a mechanism to group my stores, because there's a limit of 1,000 partitions (I think; it's been a while).
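
    What I had sketched out (if we ever move to Enterprise) was a partition function over a store-group column rather than over StoreID itself, so the partition count stays well under the limit. The boundary values below are made up purely to show the shape of it.

        -- Sketch only: partition by a StoreGroupID bucket instead of StoreID.
        -- Boundary values are invented for illustration.
        CREATE PARTITION FUNCTION pfStoreGroup (int)
        AS RANGE RIGHT FOR VALUES (100, 200, 300, 400, 500);

        CREATE PARTITION SCHEME psStoreGroup
        AS PARTITION pfStoreGroup ALL TO ([PRIMARY]);

        -- The mapping table would then be created on the scheme:
        -- CREATE TABLE dbo.StoreProduct (..., StoreGroupID int NOT NULL, ...)
        --     ON psStoreGroup (StoreGroupID);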

    In the statistics, SQL Server groups your leading column into at most 200 histogram steps. That's pretty much the same kind of grouping I wanted to do for my partition function.
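
    (That's the histogram you can see with DBCC SHOW_STATISTICS; the index name below is just an example from my schema.)

        -- View the up-to-200-step histogram on the leading column.
        DBCC SHOW_STATISTICS ('dbo.StoreProduct', IX_StoreProduct_StoreID)
            WITH HISTOGRAM;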

    I just wish there were a way to insert/delete the 2.5M records in my table without it escalating to a full table lock; at 2-3 minutes, that causes timeouts for any other store retrieving data from the table.
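
    The closest I've come is combining the small batches above with letting readers see the last committed version instead of blocking behind the bulk change, and/or turning off lock escalation on that table. I'm not sure either is the right answer, which is why I'm asking; the database and table names below are placeholders.

        -- Possible mitigations I've been looking at (not sure they're right):
        -- readers see the last committed row versions instead of blocking.
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

        -- Keep the batched delete/insert from escalating to a table lock
        -- (SQL Server 2008 and later only).
        ALTER TABLE dbo.StoreProduct SET (LOCK_ESCALATION = DISABLE);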

    Any ideas?

    Thanks!