Paul asks, @steve-2 McNamee: Are you saying that the technique used in the article is faster than FOR XML PATH for fewer than 500 elements? Not nit-picking; I just want to be clear on it. For me, the greater problem is that using variables in this way lacks an ORDER guarantee. Incorrect results, even when produced fractionally more quickly, are rarely to be desired.
No, I'm not saying that; forgive my offhand remark. To clarify: about 6 months ago (before reading the great threads here about shredding), I was helping test one of my colleague's CSV-shredding CLR functions. I used a variant of this topic's technique to generate test data. When I lazily included the CSV-generating code within the timing of the CLR, the results caused me to doubt the effectiveness of the CLR. I finally figured out that this technique, while handy and cool, can take a very long time to generate large CSV strings. The number 500 sticks in my head, but I don't have any evidence to back that up. For sure, when I got up to about 50,000 elements, the technique would not even return results before I got tired of waiting. That's when I found the XML technique to generate CSV strings, which did not seem to suffer from the same performance limitation.
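For anyone following along, here's a minimal sketch of the two techniques being compared. The table (sys.objects) and the ordering column are just illustrative, not what I used in the original test:

```sql
-- Variable-concatenation technique: each row appends to the variable,
-- so the string is copied over and over as it grows. There is also no
-- documented ORDER guarantee with this pattern.
DECLARE @CSVList varchar(max) = '';

SELECT @CSVList = @CSVList + ',' + name
FROM sys.objects;

-- Strip the leading comma.
SET @CSVList = STUFF(@CSVList, 1, 1, '');

-- FOR XML PATH technique: builds the string in a single pass and
-- honors an explicit ORDER BY inside the subquery.
SELECT STUFF(
         (SELECT ',' + name
          FROM sys.objects
          ORDER BY name
          FOR XML PATH('')),
         1, 1, '') AS CSVList;
```

One caveat on the XML version: characters like `<`, `>`, and `&` get entitized unless you add `, TYPE).value('.', 'varchar(max)')` around the subquery.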
From an email I sent at the time:
Last week, I thought I had found a performance limit in the CLR parsing function when parsing strings larger than about 30k elements. I was wrong about that. The offending code was the TSQL code I used to create the CSV list that I then parsed using the function. With a @CSVList of 515,847 members, a SELECT COUNT(*) from the CLR function takes about a second.