• jdbrown239 (3/5/2015)


    @Alan Thanks Alan

    The Paul White post is great! I learn something every time I post in this forum. I am going to set up a test using your example with make_parallel.

    @mm Thank you also MM

    The technet tip is also good and may explain why the update always runs serially.

    @jeff

    I have the task of running multiple updates against a table with a row count of 107 million. Even after splitting the table into smaller tables (about 38 million rows each), the updates are still very time consuming.

    I thank you all for your help.

    Doug

    The problem is likely a thing called the "tipping point", and every system has one. On my older computer at home, the tipping point comes into play somewhere past 3 million rows (depending on the width of the table). It takes just 3 seconds to update 1 million rows, only 6 seconds for 2 million rows, and only 9 seconds for 3 million rows, all as expected. For 4 million rows, though, it takes well over 2 hours because I've gone past the mysterious (related to the amount of memory available) but ever-present "tipping point". You also have to consider what a mega-update does to the log file, even if you're using the SIMPLE recovery model.

    I recommend breaking the updates into smaller batches, along with a proper index so you can have an efficient WHERE clause that filters out the rows that have already been updated. The index does not need to be permanent.
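    The batching above could be sketched roughly like this. All names here (dbo.BigTable, SomeCol, Processed, the 1-million-row batch size) are hypothetical placeholders, not from the original post; adjust to your own schema, and test the batch size against your own system's tipping point.

    ```sql
    -- Temporary filtered index so the WHERE clause can find unprocessed rows cheaply.
    -- It can be dropped once the updates are complete.
    CREATE NONCLUSTERED INDEX IX_BigTable_Processed
        ON dbo.BigTable (Processed)
        WHERE Processed = 0;

    DECLARE @Rows INT = 1;

    WHILE @Rows > 0
    BEGIN
        -- Update one batch, staying well under the tipping point.
        UPDATE TOP (1000000) t
           SET t.SomeCol   = UPPER(t.SomeCol),  -- illustrative change only
               t.Processed = 1                  -- mark the row as done
          FROM dbo.BigTable t
         WHERE t.Processed = 0;                 -- skip rows already updated

        SET @Rows = @@ROWCOUNT;

        -- In the SIMPLE recovery model, a checkpoint between batches lets
        -- log space be reused so the log file doesn't balloon.
        CHECKPOINT;
    END;

    DROP INDEX IX_BigTable_Processed ON dbo.BigTable;
    ```

    Each pass touches at most one million rows, so the log only ever has to hold one batch's worth of changes, and a failure partway through loses only the current batch.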

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)