• mark hutchinson (5/17/2012)


    1. generally, you will find that 500-1000 rows/transaction will give the optimal performance. Beyond that, you won't see much performance improvement.

    Hi Mark and thanks for the reply.

    I would advise against having, or promoting, a set range of rows per transaction, as there are far too many variables that play into overall performance. I have seen queries that work great at 80 rows yet hang at 100, as well as others that are fine in the 5,000 - 10,000 range. It all comes down to testing to find what is optimal for a particular situation.

    2. doing all 3 million rows in the transaction adversely impacts server resources and performance.

    Can you be more specific, please? My testing shows that using a streaming TVP for 3 million rows has very little impact on CPU and memory, so which resource(s) are you thinking of? Sure, the transaction log might grow slightly faster/larger, but that might be worth the trade-off. Also keep in mind that the 3 million rows is a contrived example to show the disparity between the different mechanisms used to get the data into the DB. If locking a table for 20 seconds is too long and increases blocking, then the operation can be batched, but that wasn't really the point of the article.
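    Just to illustrate the batching alternative mentioned above, here is a rough sketch (all table, column, and variable names are hypothetical, and it assumes the incoming rows have already landed in a staging table):

    ```sql
    -- Hypothetical sketch: move rows from a staging table into the
    -- destination in small transactions so that no single transaction
    -- holds locks on dbo.Destination for very long.
    DECLARE @BatchSize INT = 10000,
            @RowsMoved INT = 1;

    WHILE (@RowsMoved > 0)
    BEGIN
        BEGIN TRANSACTION;

        -- DELETE TOP (n) ... OUTPUT moves a batch in a single statement
        DELETE TOP (@BatchSize) stg
            OUTPUT DELETED.Col1, DELETED.Col2
            INTO dbo.Destination (Col1, Col2)
        FROM dbo.Staging stg;

        SET @RowsMoved = @@ROWCOUNT;

        COMMIT TRANSACTION;
    END;
    ```

    The batch size here is just a placeholder; as noted above, the right value is whatever testing shows to be optimal for the particular situation.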

    3. If the upload table is going to be shared, you might also consider doing individual uploads to separate tables (one per user) and then executing an Insert statement from the upload table to the shared table -- cached upload.

    I don't see any particular gain here, as the INSERT into the destination table should take the same amount of time either way. But the beauty of the TVP approach, even if not streaming, is that the incoming data can first go into a temporary table (if any massaging of the data needs to occur, or if something needs to be indexed) and then into the shared table (possibly in batches), thus eliminating any need for additional schema (i.e. the per-user tables). Otherwise, assuming a more typical case of thousands, tens of thousands, or even several hundred thousand rows to be imported, the operation would likely happen quickly enough to not even need those additional steps. But again, testing is key, and it is easy enough to add the intermediary temp table.
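    For anyone curious what that intermediary temp table step looks like, here is a minimal sketch (the type, table, and column names are assumptions for illustration, not from the article):

    ```sql
    -- Hypothetical table type for the TVP parameter
    CREATE TYPE dbo.ImportRows AS TABLE
    (
        Col1 INT NOT NULL,
        Col2 VARCHAR(50) NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.ImportData
        @Rows dbo.ImportRows READONLY
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Stage the incoming rows so they can be indexed and/or massaged
        SELECT Col1, Col2
        INTO   #Staging
        FROM   @Rows;

        CREATE CLUSTERED INDEX IX_Staging ON #Staging (Col1);

        -- (any massaging of the data would happen here)

        INSERT INTO dbo.SharedTable (Col1, Col2)
        SELECT Col1, Col2
        FROM   #Staging;
    END;
    GO
    ```

    The final INSERT could also be wrapped in a batching loop if the row counts were large enough to warrant it.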

    Take care,

    Solomon...

    SQL# https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
    Sql Quantum Lift https://SqlQuantumLift.com/ ( company )
    Sql Quantum Leap https://SqlQuantumLeap.com/ ( blog )
    Info sites: Collations     •     Module Signing     •     SQLCLR