Tally OH! An Improved SQL 8K “CSV Splitter” Function
Steven Willis (SSC-Addicted)
Posted Friday, December 28, 2012 1:52 PM

Sean Lange (12/28/2012)
telcogod (12/28/2012)
what about xml?

Did you read the whole article and look at the performance comparisons? This style of xml splitter was on the list and it was not as fast.

As Jeff said above, there is a full test harness posted. Try out your xml splitter next to the DelimitedSplit8K. You might be surprised.

I ran Jeff's test script and DelimitedSplit8K is the clear winner for splitting one-dimensional delimited string arrays. (Unless your system can be configured to allow the use of CLRs--in which case the CLR splitter is the fastest BY FAR.) I even changed his test script slightly to use random alphanumerics in the delimited string and the results were the same.

However, if a second dimension is added to the array ('abc,xyz|def,uvw') things change. I've been doing some testing (using a modified version of Jeff's test methodology) for that form of data, using methods proposed for splitting such strings in another thread: Split input string into multicolumn - multirows

The CLR Split is still far superior to any other method. But if the use of a CLR is not possible, the winner of the 2-dimensional array split contest on my machine was an XML splitter offered by "Mr Magoo". DelimitedSplit8K runs a distant second as the size, number, and randomness of the element pairs increases. Most methods attack the 2-dim strings by using joins or a cross apply with regular split functions to do the second level split.
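
For example, a typical "cascade" of a regular splitter looks roughly like this (a sketch only, assuming the article's dbo.DelimitedSplit8K with its ItemNumber/Item output columns):

-- sample 2-dimensional input
DECLARE @pString VARCHAR(8000) = 'abc,xyz|def,uvw';

-- split on the "outer" delimiter, then CROSS APPLY a second split on the
-- "inner" delimiter; note that the whole thing compiles as one big query
SELECT pair.ItemNumber AS PairNumber,
       element.ItemNumber AS ElementNumber,
       element.Item
FROM dbo.DelimitedSplit8K(@pString, '|') AS pair
CROSS APPLY dbo.DelimitedSplit8K(pair.Item, ',') AS element;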

Here's Mr. Magoo's method for splitting element pairs:


-- set up some sample input data
declare @input varchar(max);

set @input = '30;38469|31;38470|32;38471|33;38472|34;38473|35;38474|36;38475|37;38476|38;38477|'

-- convert to xml
-- of course, this could be done in your web app and may be quicker.
declare @xml xml;
SET @xml =
CONVERT(xml,
'<row><col>'+
REPLACE(
REPLACE(
REPLACE(
REPLACE(
@input,
CHAR(13),
''
),
CHAR(10),
''
),
'|',
'</col></row><row><col>'
),
';',
'</col><col>'
)
+'</col></row>'
);

-- and select the values out as rows/columns
select
nd.value('(col/text())[1]','varchar(200)') as field1,
nd.value('(col/text())[2]','varchar(200)') as field2
from @xml.nodes('row') as x(nd)
where not nd.value('(col/text())[1]','varchar(200)') is null


Post #1401051
Bruce W Cassidy (SSC Eights!)
Posted Friday, December 28, 2012 3:06 PM

Hi Jeff, I managed (I think) to read my way through not only the update to an excellent article but the subsidiary commentary and the code fallout. Now I have some serious reading to do from Paul (always fun, although brains leaking out my ears is not a good look.)

I had to go back through the code though because there were a couple of things I wondered about.

The first is that you do the isnull(nullif()) thing. Yes, I get that it's faster than a case statement (I've used the same construct myself in the past). What I wondered was why you bother doing that at all; why not just add another boundary at the end, in much the same way as you union on the zero at the beginning?
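
For anyone following along, the construct in question is the one in the article's final SELECT (a minimal sketch; s.N1 is the start position of an item):

-- CHARINDEX returns 0 when no further delimiter exists; NULLIF turns that
-- 0 into NULL, and ISNULL then substitutes 8000 so that SUBSTRING captures
-- the remainder of the string
Item = SUBSTRING(@pString, s.N1,
       ISNULL(NULLIF(CHARINDEX(@pDelimiter, @pString, s.N1), 0) - s.N1, 8000))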

The second is: why do you do the second charindex()? The list you construct has the positions of all of the delimiters in the string, so why not just reuse the list to give you the next position as well? The big if here is whether a join to reuse the list is quicker than doing the second charindex() scan, so it may well be that you tried this approach and rejected it (although I couldn't see any examples).

I put together a little code just to illustrate the changes. Please note, I am pretty sure that this code didn't originate with me (and obviously it's also based on your own code)! I remember a discussion a few years back on this very site from which I got an earlier version of a string splitter, so all I've done is try to re-cast that code back into a similar shape as your own code.

Anyway, here's the example (just enough code to illustrate the changes, not a full splitter function). I haven't tested the code since I figure you probably already looked at similar changes and rejected them, but if you think it's worthwhile I can go ahead and create a full splitter function from it and run it through your test frame.

declare	@pString varchar(8000) = '1,22,333',
@pDelimiter char(1) = ',';


with
T1(N) as (
select 1 union all select 1 union all select 1 union all
select 1 union all select 1 union all select 1 union all
select 1 union all select 1 union all select 1 union all select 1
),
T2(N) as (
select 1 from T1 as TA cross join T1 as TB
),
T4(N) as (
select 1 from T2 as TA cross join T2 as TB
),
Tally(N) as (
select top (datalength(isnull(@pString, '1')))
row_number() over (order by (select null))
from T4
),
--
-- generate the list of "N+1" positions that mark the start of each
-- item in the string being split (N being the position of the delimiters)
--
BaseItem as (
select 1 as ItemPosition
union all
select N + 1 as ItemPosition
from Tally
where substring(@pString, N, 1) = @pDelimiter
--
-- add terminating boundary to the list of positions
--
union all
select datalength(isnull(@pString, '1')) + 2 -- need N+1 past imaginary final delimiter
),
--
-- order the list of item start positions so we can match each
-- start to the next starting position
--
ItemStart as (
select row_number() over (order by ItemPosition) as ItemNumber,
ItemPosition as ItemStart
from BaseItem as ItemStart
)
select ItemStart.ItemNumber,
ItemStart.ItemStart,
ItemEnd.ItemStart - ItemStart.ItemStart - 1 as ItemLength
from ItemStart
--
-- join back to the same list to get the next item starting position (and
-- avoiding the second charindex()
--
inner join
ItemStart as ItemEnd
on ItemEnd.ItemNumber = ItemStart.ItemNumber + 1;

Edited to add: from the little testing I did do, this appears to be (potentially) much more expensive, as the join includes a sort; the code I posted has an estimated cost of 98% versus 2% for the equivalent code from Jeff. So what I'm trying to understand is why a sort, which I would have thought is fairly well optimized in SQL Server, is so much more expensive than repeated string scans using charindex().
Post #1401076
Jeff Moden (SSC-Dedicated)
Posted Friday, December 28, 2012 7:10 PM

Bruce W Cassidy (12/28/2012)
Hi Jeff, I managed (I think) to read my way through not only the update to an excellent article but the subsidiary commentary and the code fallout. Now I have some serious reading to do from Paul (always fun, although brains leaking out my ears is not a good look.)

I had to go back through the code though because there were a couple of things I wondered about.

The first is that you do the isnull(nullif()) thing. Yes, I get that it's faster than a case statement (I've used the same construct myself in the past). What I wondered was why you bother doing that at all; why not just add another boundary at the end, in much the same way as you union on the zero at the beginning?

Hi Bruce,

Thank you for the excellent feedback and suggestions/questions. I hope I can answer them.

I was very interested in your suggestion above. I've not even completed full functional testing on it, but it looks very promising. I guess the final proof of the pudding will be to find out if a UNION ALL and an extra DATALENGTH is faster than a number of ISNULL/NULLIFs.


The second is: why do you do the second charindex()? The list you construct has the positions of all of the delimiters in the string, so why not just reuse the list to give you the next position as well? The big if here is whether a join to reuse the list is quicker than doing the second charindex() scan, so it may well be that you tried this approach and rejected it (although I couldn't see any examples).

I put together a little code just to illustrate the changes. Please note, I am pretty sure that this code didn't originate with me (and obviously it's also based on your own code)! I remember a discussion a few years back on this very site from which I got an earlier version of a string splitter, so all I've done is try to re-cast that code back into a similar shape as your own code.

Anyway, here's the example (just enough code to illustrate the changes, not a full splitter function). I haven't tested the code since I figure you probably already looked at similar changes and rejected them, but if you think it's worthwhile I can go ahead and create a full splitter function from it and run it through your test frame.

declare	@pString varchar(8000) = '1,22,333',
@pDelimiter char(1) = ',';


with
T1(N) as (
select 1 union all select 1 union all select 1 union all
select 1 union all select 1 union all select 1 union all
select 1 union all select 1 union all select 1 union all select 1
),
T2(N) as (
select 1 from T1 as TA cross join T1 as TB
),
T4(N) as (
select 1 from T2 as TA cross join T2 as TB
),
Tally(N) as (
select top (datalength(isnull(@pString, '1')))
row_number() over (order by (select null))
from T4
),
--
-- generate the list of "N+1" positions that mark the start of each
-- item in the string being split (N being the position of the delimiters)
--
BaseItem as (
select 1 as ItemPosition
union all
select N + 1 as ItemPosition
from Tally
where substring(@pString, N, 1) = @pDelimiter
--
-- add terminating boundary to the list of positions
--
union all
select datalength(isnull(@pString, '1')) + 2 -- need N+1 past imaginary final delimiter
),
--
-- order the list of item start positions so we can match each
-- start to the next starting position
--
ItemStart as (
select row_number() over (order by ItemPosition) as ItemNumber,
ItemPosition as ItemStart
from BaseItem as ItemStart
)
select ItemStart.ItemNumber,
ItemStart.ItemStart,
ItemEnd.ItemStart - ItemStart.ItemStart - 1 as ItemLength
from ItemStart
--
-- join back to the same list to get the next item starting position (and
-- avoiding the second charindex()
--
inner join
ItemStart as ItemEnd
on ItemEnd.ItemNumber = ItemStart.ItemNumber + 1;

Edited to add: from the little testing I did do, this appears to be (potentially) much more expensive, as the join includes a sort; the code I posted has an estimated cost of 98% versus 2% for the equivalent code from Jeff. So what I'm trying to understand is why a sort, which I would have thought is fairly well optimized in SQL Server, is so much more expensive than repeated string scans using charindex().


I guess the answer to that question can actually be found in a comment you made in your good code...
-- join back to the same list to get the next item starting position (and
-- avoiding the second charindex()

If you look at the execution plan, it doesn't actually use the "same list". CTEs have some wonderful advantages but they also have a hidden fault. Instead of using the same internal result set from a CTE when things like self joins are used, the entire CTE is re-executed. That immediately makes generation of the cteTally twice as slow. Then comes the join. On my 2005 box, that manifests itself as a Hash Join that adds 60% more overhead to something that already takes twice as long to execute. Forcing a Merge Join makes the Execution Plan look better but it actually takes more CPU and duration. Forcing a Loop Join just absolutely kills the performance.
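
You can see the re-execution for yourself with a quick demo (not from the article; NEWID() just makes the separate evaluations visible):

-- each reference to the CTE is expanded and executed independently, so the
-- two columns come back with two different GUIDs
WITH cte AS (SELECT NEWID() AS g)
SELECT a.g AS FirstRef, b.g AS SecondRef
FROM cte AS a
CROSS JOIN cte AS b;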

On the other hand, using CHARINDEX() to find the next delimiter after the current one is handled by machine-level code that operates about as quickly as any CLR you could write to do the same thing.

I probably should have mentioned in the article that I had already tried all of that, but the article was almost too long as it was.

Again, thank you for the thoughtful feedback, Bruce. I really appreciate it. I will try that first suggestion and let folks know. It may take me a bit because I'm up to my eyes in it at my job. Gotta love year end.


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

"Change is inevitable. Change for the better is not." -- 04 August 2013
(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1401104
Jeff Moden (SSC-Dedicated)
Posted Friday, December 28, 2012 7:41 PM

Steven Willis (12/28/2012)
Sean Lange (12/28/2012)
telcogod (12/28/2012)
what about xml?

Did you read the whole article and look at the performance comparisons? This style of xml splitter was on the list and it was not as fast.

As Jeff said above, there is a full test harness posted. Try out your xml splitter next to the DelimitedSplit8K. You might be surprised.

I ran Jeff's test script and DelimitedSplit8K is the clear winner for splitting one-dimensional delimited string arrays. (Unless your system can be configured to allow the use of CLRs--in which case the CLR splitter is the fastest BY FAR.) I even changed his test script slightly to use random alphanumerics in the delimited string and the results were the same.

However, if a second dimension is added to the array ('abc,xyz|def,uvw') things change. I've been doing some testing (using a modified version of Jeff's test methodology) for that form of data, using methods proposed for splitting such strings in another thread: Split input string into multicolumn - multirows

The CLR Split is still far superior to any other method. But if the use of a CLR is not possible, the winner of the 2-dimensional array split contest on my machine was an XML splitter offered by "Mr Magoo". DelimitedSplit8K runs a distant second as the size, number, and randomness of the element pairs increases. Most methods attack the 2-dim strings by using joins or a cross apply with regular split functions to do the second level split.

Here's Mr. Magoo's method for splitting element pairs:


-- set up some sample input data
declare @input varchar(max);

set @input = '30;38469|31;38470|32;38471|33;38472|34;38473|35;38474|36;38475|37;38476|38;38477|'

-- convert to xml
-- of course, this could be done in your web app and may be quicker.
declare @xml xml;
SET @xml =
CONVERT(xml,
'<row><col>'+
REPLACE(
REPLACE(
REPLACE(
REPLACE(
@input,
CHAR(13),
''
),
CHAR(10),
''
),
'|',
'</col></row><row><col>'
),
';',
'</col><col>'
)
+'</col></row>'
);

-- and select the values out as rows/columns
select
nd.value('(col/text())[1]','varchar(200)') as field1,
nd.value('(col/text())[2]','varchar(200)') as field2
from @xml.nodes('row') as x(nd)
where not nd.value('(col/text())[1]','varchar(200)') is null




You've actually answered your own question. As you pointed out, DelimitedSplit8K is nasty fast at doing what it was designed for, which is splitting non-blob varchars with an unknown number of delimiters/elements on a single delimiter. Only a well-written CLR (like Paul White's) is faster at solving the same problem directly. If you "cascade" the function using more than one Cross Apply, you have to remember that ALL of the code becomes a single gigantic query, which doesn't run as efficiently as a single usage of the function.

Shifting gears, Magoo's code, as fine as it is for the specific task given, can also only do one thing well. If you do something to the data even as simple as adding one extra element to the semicolon-delimited data, you will need to modify the code to accommodate the change. That's NOT a bad thing if the data isn't expected to change on a regular basis. It just proves that well-written code dedicated to solving a dedicated problem will be faster than generic code applied to the same dedicated data.

I've not had a lot of "deep dive" time available recently but the problem of multiple dimension (usually 2 dimensions as in your example but with unknown numbers of elements in each dimension) splitting has come up often enough where I should probably spend some time on the problem. Still, and as you've pointed out, something like Paul White's CLR works just fine for such a thing because it's faster to begin with and treats the "work load" as two separate queries instead of one massive one.

Heh... now that I've said that, I wonder if dumping the first split into a Temp Table and then resplitting that would be faster than the XML method?
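
Something like this is what I mean (just a sketch, totally untested, using the ItemNumber/Item columns that DelimitedSplit8K returns):

-- sample 2-dimensional input in the same format as Steven's
DECLARE @pString VARCHAR(8000) = '30;38469|31;38470|32;38471';

-- step 1: split on the "outer" delimiter into a Temp Table
SELECT split1.ItemNumber, split1.Item
INTO #FirstSplit
FROM dbo.DelimitedSplit8K(@pString, '|') AS split1;

-- step 2: resplit on the "inner" delimiter as a second, separate query
SELECT fs.ItemNumber AS PairNumber, split2.ItemNumber, split2.Item
FROM #FirstSplit AS fs
CROSS APPLY dbo.DelimitedSplit8K(fs.Item, ';') AS split2;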


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

"Change is inevitable. Change for the better is not." -- 04 August 2013
(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1401108
Bruce W Cassidy (SSC Eights!)
Posted Friday, December 28, 2012 9:49 PM

Jeff Moden (12/28/2012)
I guess the final proof of the pudding will be to find out if a UNION ALL and an extra DATALENGTH is faster than a number of ISNULL/NULLIFs.

Yeah, I wondered about that myself. I wondered whether having a CTE that evaluates the datalength once, cross joined where it's needed, would help, but then you mentioned that the CTE can get evaluated more than once, so in effect that wouldn't resolve it. Does SQL Server do common factor elimination? If so, it might only evaluate the datalength once anyway.

By the way, an alternative to isnull(nullif()) is to use the sign of an expression (which in your case would evaluate to either 1 or 0), multiply the sub-expressions by either the sign or one minus the sign, and then add the two sub-expressions together. Your isnull(nullif()) combination is way clearer (and probably a lot quicker as well), I think.
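
Just to show what I mean, here's the sign version of that calculation (a contrived sketch only):

-- @d is the CHARINDEX result; when no delimiter is found, @d = 0, so
-- SIGN(@d) = 0, the first term drops out, and the 8000 fallback remains
DECLARE @pString VARCHAR(8000) = '1,22,333',
        @start   INT = 6;  -- start position of the last item
DECLARE @d INT = CHARINDEX(',', @pString, @start);
SELECT ItemLength = (@d - @start) * SIGN(@d) + 8000 * (1 - SIGN(@d));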

Jeff Moden (12/28/2012)
If you look at the execution plan, it doesn't actually use the "same list". CTEs have some wonderful advantages but they also have a hidden fault. Instead of using the same internal result set from a CTE when things like self joins are used, the entire CTE is re-executed.

That's something I didn't know, and it explains a lot! Thanks Jeff!

I guess in some cases, if SQL Server does apply common factor elimination, then that may drop out, but there will be a limit on what it will factor out.

Jeff Moden (12/28/2012)
I probably should have mentioned in the article that I had already tried all of that, but the article was almost too long as it was.

As I mentioned earlier, I was pretty sure the code I had been using (which did the self join thing) came from an earlier article on the same topic, so I suspected you had looked at it and ruled it out. That's why I was hoping to find out why that solution wasn't what I would call "optimal".

Cheers, and thanks again Jeff. Job hunting over here (in between contracts) so I've got time to look at stuff like this. :D
Post #1401118
Jeff Moden (SSC-Dedicated)
Posted Saturday, December 29, 2012 10:02 AM

Bruce W Cassidy (12/28/2012)
Jeff Moden (12/28/2012)
If you look at the execution plan, it doesn't actually use the "same list". CTEs have some wonderful advantages but they also have a hidden fault. Instead of using the same internal result set from a CTE when things like self joins are used, the entire CTE is re-executed.

That's something I didn't know, and it explains a lot! Thanks Jeff!

I guess in some cases, if SQL Server does apply common factor elimination, then that may drop out, but there will be a limit on what it will factor out.


I've not done a deep dive on it, but I believe that any common factor eliminations would only happen within each "instance" of the CTE. Each "call" to the CTE causes everything inside the CTE to be recalculated. I do wish that MS had been a bit smarter about that. They shot a really great feature right in the mouth. The same goes for multiple calls to a view within the same query... each "call" is evaluated separately and the view code is executed once for each alias of the view.


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

"Change is inevitable. Change for the better is not." -- 04 August 2013
(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1401182
Jeff Moden (SSC-Dedicated)
Posted Saturday, December 29, 2012 10:05 AM

buyme43 (12/28/2012)
input this URL:

***Url removed ***
you can find many cheap and high stuff

Believe you will love it.

WE ACCEPT CREDIT CARD /WESTERN UNION PAYMENT
YOU MUST NOT MISS IT!!!

Speaking of "cheap", SPAM reported.


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

"Change is inevitable. Change for the better is not." -- 04 August 2013
(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1401184
SSC Rookie
Posted Saturday, December 29, 2012 11:27 AM

@Jeff

Did any of your TallyTable approaches use this simplified approach (sketched below)?
content until the first delimiter
UNION ALL
tallytable split rows until the last delimiter
UNION ALL
content after the last delimiter
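
In other words, something like this (a rough sketch; it assumes at least one delimiter is present):

DECLARE @pString    VARCHAR(8000) = '1,22,333',
        @pDelimiter CHAR(1)       = ',';

WITH cteTally(N) AS (
    -- cheap row source; good enough for a sketch
    SELECT TOP (DATALENGTH(@pString))
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns
)
-- head: everything before the first delimiter
SELECT SUBSTRING(@pString, 1, CHARINDEX(@pDelimiter, @pString) - 1)
UNION ALL
-- middle: tally-table split of the items between delimiters
SELECT SUBSTRING(@pString, t.N + 1,
       CHARINDEX(@pDelimiter, @pString, t.N + 1) - t.N - 1)
FROM cteTally t
WHERE SUBSTRING(@pString, t.N, 1) = @pDelimiter
  AND CHARINDEX(@pDelimiter, @pString, t.N + 1) > 0
UNION ALL
-- tail: everything after the last delimiter
SELECT SUBSTRING(@pString,
       LEN(@pString) - CHARINDEX(@pDelimiter, REVERSE(@pString)) + 2, 8000);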




Post #1401195
Steven Willis (SSC-Addicted)
Posted Saturday, December 29, 2012 11:48 AM

Jeff Moden (12/28/2012)
Heh... now that I've said that, I wonder if dumping the first split into a Temp Table and then resplitting that would be faster than the XML method?

I modified DelimitedSplit8K by simply splitting on the "outer" delimiter then using a simple replace to get the second half of each pair.

I wasn't expecting much, but my test results show that it beat the XML Split method and even the CLR split until the size and number of the value pairs got very large. The CLR Split must still use a cross apply when splitting pairs, so if the CLR function were re-written to parse both halves of each pair within the function itself, it would probably still be the fastest method.

All of the testing code (modified from Jeff Moden's code in this article) is posted at Split input string into multicolumn - multirows

The code for using DelimitedSplit8K to split value pairs (i.e., 2-dimensional arrays):


CREATE FUNCTION [dbo].[DelimitedSplit8K_2DIM]
(
@pString VARCHAR(8000)
,@pDelimiter1 CHAR(1)
,@pDelimiter2 CHAR(1) = NULL
)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (
SELECT 0 UNION ALL
SELECT TOP (DATALENGTH(ISNULL(@pString,1))) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (
SELECT t.N+1
FROM cteTally t
WHERE (SUBSTRING(@pString,t.N,1) = @pDelimiter1 OR t.N = 0)
)
SELECT
ItemNumber
,Item1
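-- REPLACE strips the leading "Item1 + inner delimiter" text from the
-- captured pair, leaving just the second element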
,Item2 = REPLACE(Item2,Item1+@pDelimiter2,'')
FROM
(
SELECT
ItemNumber = ROW_NUMBER() OVER(ORDER BY s.N1)
,Item1 = SUBSTRING(@pString,s.N1,ISNULL(NULLIF(CHARINDEX(@pDelimiter2,@pString,s.N1),0)-s.N1,8000))
,Item2 = SUBSTRING(@pString,s.N1,ISNULL(NULLIF(CHARINDEX(@pDelimiter1,@pString,s.N1),0)-s.N1,8000))
FROM cteStart s
) i1


/*
Example usage:

DECLARE @str VARCHAR(8000)
SET @str = '123,456|789,000'

SELECT * FROM [dbo].[DelimitedSplit8K_2DIM](@str,'|',',')

(run the next example in a separate batch, since @str is redeclared)
DECLARE @str VARCHAR(8000)
SET @str = '0WQDNw|aXxbzu,durPP|7yaFpK,UMERA5|FLN2G,HUdZv|QUQy5,3MbdqS|JWUgPp,F23jqp|kWbSBn,nSWunU|uh1zR,pqBJ4U|eNnZzE,jbu7R|cwd4E,1hNMC|Ru7ar'

SELECT * FROM [dbo].[DelimitedSplit8K_2DIM](@str,',','|')

*/


Post #1401198
Mr. Magoo (SSCommitted)
Posted Saturday, December 29, 2012 2:33 PM

Steven Willis (12/28/2012)

Here's Mr. Magoo's method for splitting element pairs:



Gosh, I hope that was a mistake?
That code was not the best version of the XML split for that problem by a long way.

I won't clutter up this thread with the better version - it is in the other thread you linked to and, as Jeff has pointed out, it was for a specific problem, not a generic splitter.

Otherwise, thanks for the nod.


MM


  • MMGrid Addin
  • MMNose Addin


  • Forum Etiquette: How to post Reporting Services problems
  • Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden
  • How to Post Performance Problems - by Gail Shaw

Post #1401204