Convert CSV values in three columns to rows
Posted Tuesday, June 29, 2010 3:38 PM
So, the inline CTE has better performance than a tally table? Interesting. Sometimes SQL's performance traits are a bit quirky.

Scott Pletcher, SQL Server MVP 2008-2010
Post #945076
Posted Tuesday, June 29, 2010 5:56 PM


WayneS (6/29/2010)
2. I see you changed the input parameter from varchar(8000) to varchar(7999), but I don't see a reason for that in your notes. Would you elaborate on this please?

I think I can answer that one ...

He appends @pDelimiter to @pString when searching for @pDelimiter with CHARINDEX (to guarantee the delimiter will be found), and that concatenated search expression is limited to 8000 characters.
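A quick sketch of that limit in action (illustrative only, not from the original post; the variable names just mirror the function's parameters):

DECLARE @pString    VARCHAR(8000);
DECLARE @pDelimiter CHAR(1);

SELECT @pString    = REPLICATE('x', 8000),
       @pDelimiter = ',';

-- Concatenating VARCHAR(8000) + CHAR(1) cannot exceed 8000 characters,
-- so the appended delimiter is silently truncated and is never found.
SELECT DATALENGTH(@pString + @pDelimiter)             AS SearchedLength, -- 8000, not 8001
       CHARINDEX(@pDelimiter, @pString + @pDelimiter) AS DelimiterPos;   -- 0 = not found

Capping the input at VARCHAR(7999) leaves room for that one extra delimiter.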

I read the other thread where Paul was discussing his findings. I was talking to someone at work about it and would like to find it again if anyone happens to know the link.

*edit: Nevermind about the link, found it here.


└> bt


Forum Etiquette: How to post data/code on a forum to get the best help
Post #945124
Posted Tuesday, June 29, 2010 6:00 PM


WayneS (6/29/2010)
Sigh... I sure wish we could put an individual post on the forum into the briefcase... This one of Jeff's certainly would be in it.

@Jeff -
1. Did you test after adding the WITH SCHEMABINDING? If not, I see a test in my near future.
2. I see you changed the input parameter from varchar(8000) to varchar(7999), but I don't see a reason for that in your notes. Would you elaborate on this please?
Edit:
3. Is it safe to assume that a properly configured (Clustered index, 100% FillFactor) real tally table would perform better than the inline one (the comment Scott made below brings this up)?


1. I didn't specifically test this one with WITH SCHEMABINDING because every other test I ran with schemabinding showed either no change or a minor improvement, and I'm also being a bit lazy... I wanted to post the function and didn't have the time to do the normal million-row testing on this specific function. I did so without much fear because the tests on other functions show that WITH SCHEMABINDING is one of those things you can pretty much take to the bank. You know me... I'll get to retesting it soon, though.

2. I changed the input parameter to 7999 as a clue to folks that you have to save room for at least one extra delimiter in the code (@pDelimiter + @pString and @pString + @pDelimiter). I guess I should spell out the reason in Note 1.

3. Again, I've not specifically tested this particular function. In all the other functions that I've tested, both the Tally Table and the Tally CTE perform equally well, although the Tally CTE does it with virtually no reads. Again, I'll be doing more testing, especially since I made the WITH SCHEMABINDING change.

As far as a briefcase goes... once I'm done retesting the function, I aim to include it in an update to the Tally Table article.
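For the WITH SCHEMABINDING question in #1, here's the shape of a quick retest, a rough sketch only (the #CsvTest table and its columns are made-up stand-ins for the usual test data); per Paul's note, the thing to watch for is the extra Table Spool in the UPDATE plan when the function is not schemabound:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Split each row's CSV and keep just the first element (hypothetical test table)
UPDATE tgt
   SET tgt.FirstItem = split.Item
  FROM #CsvTest tgt
 CROSS APPLY dbo.DelimitedSplit8K(tgt.CsvColumn, ',') split
 WHERE split.ItemNumber = 1;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Compare duration/reads and the actual plan with and without WITH SCHEMABINDING
-- on the function; the unbound version typically shows the extra Table Spool.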



--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #945127
Posted Tuesday, June 29, 2010 6:03 PM


scott.pletcher (6/29/2010)
So, the inline CTE has better performance than a tally table? Interesting. Sometimes SQL's performance traits are a bit quirky.


In other functions, I've not found that to be true. The Tally Table normally squeaks past the CTE on duration, but the CTE generates virtually no reads. I guess it depends on how you want to impact the system. When I get around to the Tally Table article rewrite, I'll include million-row testing for both methods.
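For anyone following along, the "real" Tally Table side of that comparison is just the classic build from the Tally Table article; a sketch of it (object names are the usual ones from the article, nothing new) looks like this:

--===== Build a physical Tally Table: unique clustered index, 100% fill factor,
     -- and enough rows to cover splitting a VARCHAR(8000).
 SELECT TOP 11000
        IDENTITY(INT,1,1) AS N
   INTO dbo.Tally
   FROM sys.all_columns ac1
  CROSS JOIN sys.all_columns ac2;

  ALTER TABLE dbo.Tally
    ADD CONSTRAINT PK_Tally_N
        PRIMARY KEY CLUSTERED (N) WITH FILLFACTOR = 100;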


--Jeff Moden
Post #945129
Posted Tuesday, July 6, 2010 5:54 PM
Jeff,

I would like to ask your opinion on the version of the split I recently wrote. It is implemented as a stored proc and does not have any limitation on the size of the string to split. The performance tests that I ran show that feeding it text consisting of 10,000 sentences (dot delimited, about 500,000 characters in total, or 1 MB in datalength) takes about 0.3 seconds to execute, and expanding the size 10-fold to 100,000 sentences (5 million characters in total, or 10 MB in datalength) increases the execution time also about 10-fold, to a total of 3 seconds. This is on a 2-year-old desktop with Windows XP SP3, 2 GB of RAM, and a dual-core CPU. The idea is to utilize the engine's ability to process xml:

create proc dbo.usp_DelimitedSplit
(
    @text      nvarchar(max),
    @delimiter char(1),
    @entitize  bit = 1
)
as
begin

    declare @xml xml;

    if @entitize = 1 set @text = (select @text for xml path(''));

    set @xml = '<r>' + replace(@text, @delimiter, '</r><r>') + '</r>';

    select
        row_number() over (order by (select null)) item_number,
        item.value('text()[1]', 'varchar(max)') item_value
    from @xml.nodes('//r') R(item);

end;
go

As you can see, the procedure is very simple. The optional @entitize parameter deserves an explanation though. It is there to ensure that if the text contains some "xml unfriendly" characters then these are entitized by default. However, it leaves the option to the calling code to pass 0 as @entitize value should the caller be aware that the text has already undergone entitizing treatment prior to the procedure execution. For example,

declare @delimiter char(1);
declare @text nvarchar(max);

select @delimiter = '.', @text = 'a<x>&z.b.c';

exec dbo.usp_DelimitedSplit @text, @delimiter;

automatically handles the unfriendly characters, returning:

item_number          item_value
-------------------- ----------
1                    a<x>&z
2                    b
3                    c
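As a quick aside (a minimal sketch, not part of the procedure), the entitizing step can be seen on its own; it is the same for xml path('') trick the proc applies when @entitize = 1:

declare @s nvarchar(max);
set @s = 'a<x>&z.b.c';

-- for xml path('') on a bare value entitizes the "xml unfriendly" characters
select (select @s for xml path('')) as entitized;  -- a&lt;x&gt;&amp;z.b.c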

Here is the test I used to start from scratch, pad the string with 10,000 sentences, and then call the procedure to split it. The number 10000 is hard-coded, but can be changed to any arbitrary number:

declare @delimiter char(1);
declare @text nvarchar(max);

set @delimiter = '.';

-- populate @text like this
-- "This is a<&>/><x>& part of the text to split into rows_000001."
-- "This is a<&>/><x>& part of the text to split into rows_000002." etc
set @text = stuff(cast(
    (
        select top 10000
               @delimiter + 'This is a<&>/><x>& part of the text to split into rows_' +
               replace(str(row_number() over(order by a.[object_id]), 6), ' ', '0')
          from sys.objects a cross join sys.objects b
           for xml path('')
    ) as varchar(max)), 1, 1, '');

-- because the @text happens to be already entitized,
-- pass 0 as @entitize parameter value
exec dbo.usp_DelimitedSplit @text, @delimiter, 0;

The above returns:

item_number          item_value
-------------------- --------------------------------------------------------------
1                    This is a<&>/><x>& part of the text to split into rows_000001
2                    This is a<&>/><x>& part of the text to split into rows_000002
/* ... abridged ... */
10000                This is a<&>/><x>& part of the text to split into rows_010000
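Because it is a stored procedure rather than an inline table-valued function, the result set cannot be CROSS APPLYed; if the split rows need to land in a table, they can be captured with INSERT ... EXEC. A sketch (the temp table is illustrative):

-- Capture the proc's output (illustrative temp table, not part of the proc)
create table #items (item_number bigint, item_value varchar(max));

insert #items (item_number, item_value)
exec dbo.usp_DelimitedSplit @text, @delimiter, 0;

select count(*) as items_returned from #items;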

Thanks,

Oleg
Post #948247
Posted Tuesday, July 6, 2010 9:24 PM


Thank you for the test harness and your good code, Oleg. I'll check it out.

--Jeff Moden
Post #948274
Posted Friday, July 9, 2010 7:29 PM


Jeff Moden (7/6/2010)
Thank you for the test harness and your good code, Oleg. I'll check it out.


Sorry, Oleg... I've not been able to put in the time to do a deep dive on the code, and it certainly deserves it. I'm not going to be able to get to it for at least another 10 days because I'm going on vacation where I won't have access to a computer with SQL Server on it.

--Jeff Moden


Post #950285
Posted Thursday, August 26, 2010 5:27 PM


The initial testing I just got done doing looks pretty darned good on a parameter of 100,000 elements, Oleg. I'm still playing with it.

--Jeff Moden
Post #976039
Posted Friday, August 27, 2010 1:33 AM


Jeff Moden (6/29/2010)
WayneS (6/28/2010)
lmu92 (6/28/2010)
This thread includes a post by WayneS showing the DelimitedSplit8K function I'm using at the moment.


I believe I saw a follow-up post from Jeff where he found that the UNPIVOT (as demonstrated in the above link) turns out to be slower when he ran it on his work prod server several times. Jeff, please let us know if I remember this correctly.


You did, indeed. I didn't take the time to analyze "why", but on certain machines with multiple processors the UNPIVOT method sometimes runs substantially slower. I also didn't realize that the function you good folks were talking about was the one that I posted. Here's the latest and greatest with all the documentation and optimizations that I currently use for production code... the documentation in the header is quite substantial. {EDIT} Updated the code below to include the latest performance change (from yesterday) thanks to Paul White.

 CREATE FUNCTION dbo.DelimitedSplit8K
/***************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).

Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)

CROSS APPLY Usage Example:
---------------------------------------------------------------------------------------------------
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL
DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
 SELECT *
   INTO #JBMTest
   FROM (
         SELECT 1,'1,10,100,1000,10000,100000,1000000' UNION ALL
         SELECT 2,'2000000,200000,20000,2000,200,20,2' UNION ALL
         SELECT 3, 'This,is,a,test' UNION ALL
         SELECT 4, 'and so is this' UNION ALL
         SELECT 5, 'This, too (no pun intended)'
        ) d (SomeID,SomeValue)
;
GO
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
 SELECT test.SomeID, split.ItemNumber, split.Item
   FROM #JBMTest test
  CROSS APPLY
        (
         SELECT ItemNumber, Item
           FROM dbo.DelimitedSplit8k(test.SomeValue,',')
        ) split
;
---------------------------------------------------------------------------------------------------
Notes:
1. Optimized for VARCHAR(7999) or less. No testing or error reporting for truncation at 7999
characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolved
externally from this function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use VARCHAR(MAX) will cause it to run twice as slow. It's just the
nature of VARCHAR(MAX) whether it fits in-row or not.
7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows
that the UNPIVOT method is quite machine dependent and can slow things down quite a bit.
8. Performance testing shows using "TOP" for the limiting criteria of "N" is actually
slower and slightly more CPU intensive than the traditional WHERE N < LEN(@pString) + 2.
9. Performance testing shows using ORDER BY (SELECT x) where "x" is anything is actually
slower and slightly more CPU intensive than the traditional ORDER BY (SELECT N).

Credits:
This code is the product of many people's efforts including but not limited to the following:
cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a
bit of extra speed and finally redacted by Jeff Moden for a different slant on readability and
compactness. Hats off to Paul White for his simple explanations of CROSS APPLY. Finally,
special thanks to Erland Sommarskog for his tireless efforts to help people understand
what you can actually do with T-SQL. I also thank whoever wrote the first article I ever saw
on "numbers tables" which is located at the following URL ...
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html

Revision History:
Rev 00 - 20 Jan 2010 - Concept: Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)

Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the
SELECT List for that tiny bit of extra speed.

Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits,
and extra documentation.

Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't
actually work for this type of function.

Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary
"Table Spool" when the function is used in an UPDATE statement even though the function
makes no external references.
***************************************************************************************************/
--===== Define I/O parameters
(
@pString VARCHAR(7999),
@pDelimiter CHAR(1)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
--===== "Inline" CTE Driven "Tally Table” produces values up to
-- 10,000... enough to cover VARCHAR(8000)
WITH
E1(N) AS ( --=== Create Ten 1's
SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 --10
),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --100
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10,000
cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM E4)
--===== Do the split
 SELECT ROW_NUMBER() OVER (ORDER BY N) AS ItemNumber,
        SUBSTRING(@pString, N, CHARINDEX(@pDelimiter, @pString + @pDelimiter, N) - N) AS Item
   FROM cteTally
  WHERE N < LEN(@pString) + 2
    AND SUBSTRING(@pDelimiter + @pString, N, 1) = @pDelimiter
;
GO
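(For reference, the UNPIVOT alternative mentioned in Note 7 would replace the E1 CTE with something along these lines; it generates the same ten rows of 1s, and it's the variant that proved machine-dependent in testing:)

E1(N) AS ( --=== Ten 1's via UNPIVOT instead of SELECT/UNION ALL (sketch only)
           SELECT N
             FROM (SELECT 1 N1, 1 N2, 1 N3, 1 N4, 1 N5,
                          1 N6, 1 N7, 1 N8, 1 N9, 1 N10) src
          UNPIVOT (N FOR Col IN (N1,N2,N3,N4,N5,N6,N7,N8,N9,N10)) unpvt
         ),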





Cool! thanks, I needed this... uhmmm, I have a question about this but in a different forum/section (don't wanna double post)...

http://www.sqlservercentral.com/Forums/Topic975437-392-1.aspx?Update=1

hope you can look at it.
Thanks in advance!


_____________________________________________
Quatrei Quorizawa
:):D:P;)
MABUHAY PHILIPPINES!

"Press any key...
Where the heck is the any key?
hmmm... Let's see... there's ESC, CTRL, Page Up...
but no any key"
- Homer Simpson
Post #976203
Posted Friday, August 27, 2010 6:53 PM


Quatrei.X (8/27/2010)
Cool! thanks, I needed this... uhmmm, I have a question about this but in a different forum/section (don't wanna double post)...

http://www.sqlservercentral.com/Forums/Topic975437-392-1.aspx?Update=1

hope you can look at it.
Thanks in advance!


So how'd I do over there? Did I satisfy your questions?


--Jeff Moden
Post #976855