Merge performance on Very Large Tables

  • You could try the suggested index, but I doubt it would have any effect on the Insert operation, since the suggested index has nothing to do with the columns in the Join. An index on the Join columns in the temp table could help with the Update - i.e. find the matching records faster - but I don't see how it would help with the Insert, since everything other than the matching records gets inserted.

    I would still play with separate Update and Insert statements and benchmark each one separately (a rough sketch of that split follows this post).

    Todd Fifield
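
    A minimal sketch of the Update/Insert split described above, assuming a staging table #staging and a target table dbo.Target joined on an ID column (all table and column names are illustrative):

      -- Sketch only: #staging, dbo.Target, ID, Col1, Col2 are assumed names.
      -- Update the rows that already exist in the target.
      UPDATE t
      SET    t.Col1 = s.Col1,
             t.Col2 = s.Col2
      FROM   dbo.Target AS t
      INNER JOIN #staging AS s
              ON s.ID = t.ID;

      -- Insert the rows that have no match in the target.
      INSERT INTO dbo.Target (ID, Col1, Col2)
      SELECT s.ID, s.Col1, s.Col2
      FROM   #staging AS s
      WHERE  NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.ID = s.ID);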

  • To find out what slows down the load when nothing else helps, I would use Windows System Monitor (Performance Monitor) during the load, specifically watching % Disk Time and Avg. Disk Queue Length. Because a single record in the target table is over 1.5 KB, the load can produce a significant amount of disk I/O, and the disk remains the slowest component of the server. I would also check the size of the tempdb database during the load; it can grow significantly during the merge operation, which also produces disk I/O. If you find the disk queue length is greater than 1 on average during the load, then the disk is the bottleneck. One thing you can do to reduce the disk work is to drop the 'uniqueness' requirement from your non-clustered index, if that is tolerable by the business requirements.
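
    A minimal sketch of the kind of tempdb and disk checks described above; these queries can be run while the load is in progress:

      -- Sketch only: run during the load to see how much tempdb has grown.
      USE tempdb;
      SELECT name,
             size * 8 / 1024                             AS size_mb,
             FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024  AS used_mb
      FROM   sys.database_files;

      -- Cumulative I/O stall time per database file; high stall values
      -- point at the disk as the bottleneck.
      SELECT DB_NAME(vfs.database_id) AS database_name,
             vfs.io_stall_read_ms,
             vfs.io_stall_write_ms
      FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;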

  • I didn't dig deep into this, but I have seen this pattern many times in the past. Since you say the run time goes up a LOT as the total rows in the main table increase, that says to me you are winding up with table scans to do the updates for your 64M chunks. Switch to a smaller set of rows - down to where you get index seeks in the plan - and it can be MUCH faster in total to upsert all the incoming rows (see the batching sketch after this post). I'll note that my experience fixing problems of this type is with separate update/insert statements though, and NOT with MERGE. If you are on 2008, please note that MERGE is a 1.0 feature there and may well have suboptimal engine code under the covers.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
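
    A rough sketch of the batching approach described above, assuming the incoming rows sit in a staging table #staging with an integer key ID and that dbo.Target is indexed on that key (all names are illustrative):

      -- Sketch only: process the staging rows in small key-range batches so
      -- the update/insert stays on index seeks instead of scanning the target.
      DECLARE @BatchSize BIGINT = 50000,
              @MinID     BIGINT,
              @MaxID     BIGINT;

      SELECT @MinID = MIN(ID), @MaxID = MAX(ID) FROM #staging;

      WHILE @MinID <= @MaxID
      BEGIN
          UPDATE t
          SET    t.Col1 = s.Col1
          FROM   dbo.Target AS t
          INNER JOIN #staging AS s
                  ON s.ID = t.ID
          WHERE  s.ID >= @MinID AND s.ID < @MinID + @BatchSize;

          INSERT INTO dbo.Target (ID, Col1)
          SELECT s.ID, s.Col1
          FROM   #staging AS s
          WHERE  s.ID >= @MinID AND s.ID < @MinID + @BatchSize
            AND  NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.ID = s.ID);

          SET @MinID = @MinID + @BatchSize;
      END;

    The batch size is a tuning knob: small enough that the plan stays on index seeks, large enough that the loop overhead stays negligible.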

  • Hi ZL,

    Were you able to solve your issue?

    I'm facing a similar issue - a MERGE statement performing very slowly in a SQL 2008 R2 environment.

    The staging table has a couple of million rows and the target table has ~50 million rows.

    Most of the rows would result in an update (as 99% of them will find a matching key).

    Thanks

    Mahesh

  • Hi,

    Have you solved the problem yet? I ran into the same problem and was wondering how to merge in batches, since my MERGE is built as dynamic SQL. I would really like to know how you solved it (one possible batching sketch follows this post).

    Best,

    CW
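
    A minimal sketch of one way to run a dynamically built MERGE in key-range batches, assuming a staging table #staging with an integer key ID (all table and column names are placeholders):

      -- Sketch only: build the MERGE once with key-range parameters and
      -- execute it repeatedly over small ranges via sp_executesql.
      DECLARE @sql       NVARCHAR(MAX),
              @BatchSize BIGINT = 50000,
              @MinID     BIGINT,
              @MaxID     BIGINT;

      SELECT @MinID = MIN(ID), @MaxID = MAX(ID) FROM #staging;

      SET @sql = N'
      MERGE dbo.Target AS t
      USING (SELECT ID, Col1
             FROM   #staging
             WHERE  ID >= @FromID AND ID < @ToID) AS s
          ON t.ID = s.ID
      WHEN MATCHED THEN
          UPDATE SET t.Col1 = s.Col1
      WHEN NOT MATCHED THEN
          INSERT (ID, Col1) VALUES (s.ID, s.Col1);';

      WHILE @MinID <= @MaxID
      BEGIN
          EXEC sp_executesql @sql,
                             N'@FromID BIGINT, @ToID BIGINT',
                             @FromID = @MinID,
                             @ToID   = @MinID + @BatchSize;
          SET @MinID = @MinID + @BatchSize;
      END;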
