• mister.magoo (12/19/2012)


    And the nice thing is this method works nicely on varchar(MAX)...

    I used 2000 rows of 2000 pairs of values to test :

    Duration_Avg   Cpu_Avg   Reads_Avg   Writes_Avg   RowCounts_Avg
    106994.000000  98561     4196918     36806        4000000

    Of course, I couldn't compare this to the other methods because they don't handle MAX in their current forms, but I did test them at just 200 rows/200 pairs per row and they were getting very slow.

    I think 107 seconds to split out 4 million rows is quite good though 😀
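    As a back-of-the-envelope sanity check on those figures (my own arithmetic, not from the original test harness): 2000 rows of 2000 pairs each is 4,000,000 split rows, and the 106,994 ms Duration_Avg works out to roughly 37,000 rows per second.

    ```python
    # Sanity check on the benchmark figures quoted above.
    # 2000 source rows, each holding 2000 delimited pairs.
    rows = 2000
    pairs_per_row = 2000
    total_split_rows = rows * pairs_per_row            # matches RowCounts_Avg
    duration_ms = 106994                               # Duration_Avg from the test

    throughput = total_split_rows / (duration_ms / 1000.0)
    print(total_split_rows)    # 4000000
    print(round(throughput))   # ~37385 rows per second
    ```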

    Very interesting and thanks for the Brad Schultz link.

    I have a vague recollection of some recent discussion in Jeff's DelimitedSplit8K article that may also have touched on this approach. That discussion thread is huge, but it would be in the last 2-3 pages of it, I think. I can't recall whether it suggested using this for VARCHAR(MAX) or not. Maybe something else to investigate, or maybe even something you'd like to add to the discussion.


    My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

    My thought question: Have you ever been told that your query runs too fast?

    My advice:
    INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
    The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.

    Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
    Since random numbers are too important to be left to chance, let's generate some!
    Learn to understand recursive CTEs by example.
    [url url=http://www.sqlservercentral.com/articles/St