• rbramhall (10/28/2009)


    The problem is that whenever I update the product mix, I'm deleting/updating/inserting up to 2.5M rows in the 83M row table at a time. We threw lots of hardware at it, and brought 15 minutes down to 2-3 minutes, but still I'm trying to insert 2.5M records into an 83M record table and even with 36 hard drives, it just takes time.

    I would think this question is better suited to one of the code forums than to discussion on an excellent article on index statistics, but...

    If all you have is a product-store mapping (storeid, productid), I can't see inserting 2.5M rows taking three minutes. Therefore I suspect that you have a lot of other stuff in the table.

    That being the case, I think my first instinct would be to partition the table on storeid (a sketch follows). That doesn't mean that you have to have one partition per store: maybe the big stores each get their own and the smaller ones are grouped together. If you've got tables with ~100M rows in them, you probably should be on Enterprise Edition anyway, which is what table partitioning requires.
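
    For illustration only, here is a minimal sketch of what that could look like. The boundary values, the PRIMARY filegroup, and the table definition are assumptions for the example, not your actual schema:

        -- Group stores into ranges: storeid < 100, 100-199, 200-299, >= 300.
        CREATE PARTITION FUNCTION pfStore (int)
        AS RANGE RIGHT FOR VALUES (100, 200, 300);

        -- Put every partition on PRIMARY for simplicity; in practice you
        -- would spread the partitions across filegroups (and spindles).
        CREATE PARTITION SCHEME psStore
        AS PARTITION pfStore ALL TO ([PRIMARY]);

        -- The table is then created (or rebuilt) on the scheme, keyed by storeid.
        CREATE TABLE dbo.StoreProduct
        (
            storeid   int NOT NULL,
            productid int NOT NULL,
            CONSTRAINT PK_StoreProduct
                PRIMARY KEY CLUSTERED (storeid, productid)
        )
        ON psStore (storeid);

    The real payoff is that once a big store sits in its own partition, you can build its new mix in a staging table and swap it in with ALTER TABLE ... SWITCH, which is a metadata operation rather than a 2.5M-row insert.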

    If there is no way to make that work for some reason (cultural or otherwise), I would try a phased insert/update/delete, working with 10,000 rows or so at a time, as sketched below. That won't improve the end-to-end speed of your process, but if you choose the right number (SQL Server attempts lock escalation at roughly 5,000 locks per statement) it will prevent lock escalation.
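
    To sketch what I mean by phased (the staging table dbo.NewMix and the @storeid value are hypothetical, and dbo.StoreProduct stands in for your real table):

        DECLARE @storeid int;
        SET @storeid = 42;  -- hypothetical store whose mix is being refreshed

        -- Phase 1: delete the old mix in batches. Tune the batch size down
        -- if you still see escalation in sys.dm_tran_locks.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (10000) FROM dbo.StoreProduct
            WHERE storeid = @storeid;

            IF @@ROWCOUNT = 0 BREAK;
        END;

        -- Phase 2: insert the new mix in batches from the staging table,
        -- skipping rows that are already present.
        WHILE 1 = 1
        BEGIN
            INSERT INTO dbo.StoreProduct (storeid, productid)
            SELECT TOP (10000) s.storeid, s.productid
            FROM dbo.NewMix AS s
            WHERE NOT EXISTS (SELECT 1
                              FROM dbo.StoreProduct AS t
                              WHERE t.storeid = s.storeid
                                AND t.productid = s.productid);

            IF @@ROWCOUNT = 0 BREAK;
        END;

    Each batch commits on its own, so other sessions can get in between batches instead of waiting out one multi-minute transaction.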

    Other people may have ideas regarding such things as BULK INSERT, but I haven't worked with it much.