Convert CSV values in three columns to rows

  • Hello,

I have a table with three columns containing CSV values. I need to normalize them. The order of the CSV values is meaningful (explained more below). Here is the sample DDL:

    CREATE TABLE #t (id int, kpi1 varchar(16), kpi2 varchar(16), kpi3 varchar(16))

    insert into #t (id, kpi1, kpi2, kpi3)

    values(1, '0.1,0.11,0.111', '0.2,0.22,0.222', '0.3,0.33,0.333')

    insert into #t (id, kpi1, kpi2, kpi3)

    values(2, '0.4,0.44,0.444', '0.5,0.55,0.555', '0.6,0.66,0.666')

    select *

    from #t

    I do have a function which converts CSV values into rows (I think I got it from this site). Here is the definition:

    CREATE FUNCTION dbo.fnETL_Split_Set_CSV_To_Table (

@id int,

    @String varchar(8000),

    @Delimiter char(1)

    )

    returns @temptable TABLE (id int, items varchar(128))

    as

    begin

    declare @idx int

    declare @slice varchar(8000)

    select @idx = 1

    if len(@String)<1 or @String is null return

    while @idx!= 0

    begin

    set @idx = charindex(@Delimiter,@String)

    if @idx!=0

    set @slice = left(@String,@idx - 1)

    else

    set @slice = @String

    if(len(@slice)>0)

    insert into @temptable(id, Items) values(@id, @slice)

    set @String = right(@String,len(@String) - @idx)

    if len(@String) = 0 break

    end

    return

    end

    I tried using CROSS APPLY first and it works fine for one column:

    select t.id, t.kpi1, t.kpi2, t.kpi3, ca1.items as items1

    from

    #t as t

    cross apply

    [dbo].[fnETL_Split_Set_CSV_To_Table] (t.id, t.kpi1, ',') as ca1

    However, as soon as I try to add a second column to the mix, I get a cross-join, which is incorrect:

    select t.id, t.kpi1, t.kpi2, t.kpi3, ca1.items as items1 , ca2.items as items2

    from

    #t as t

    cross apply

    [dbo].[fnETL_Split_Set_CSV_To_Table] (t.id, t.kpi1, ',') as ca1

    cross apply

    [dbo].[fnETL_Split_Set_CSV_To_Table] (t.id, t.kpi2, ',') as ca2

    My desired result would look something like this:

    id          kpi1             kpi2             kpi3             items1  items2  items3
    ----------- ---------------- ---------------- ---------------- ------- ------- -------
    1           0.1,0.11,0.111   0.2,0.22,0.222   0.3,0.33,0.333   0.1     0.2     0.3
    1           0.1,0.11,0.111   0.2,0.22,0.222   0.3,0.33,0.333   0.11    0.22    0.33
    1           0.1,0.11,0.111   0.2,0.22,0.222   0.3,0.33,0.333   0.111   0.222   0.333
    2           0.4,0.44,0.444   0.5,0.55,0.555   0.6,0.66,0.666   0.4     0.5     0.6
    2           0.4,0.44,0.444   0.5,0.55,0.555   0.6,0.66,0.666   0.44    0.55    0.66
    2           0.4,0.44,0.444   0.5,0.55,0.555   0.6,0.66,0.666   0.444   0.555   0.666

    I hope this is clear. Basically, I want to split the CSV values in each of the columns without multiplying the number of records every time I do a split, but just to append them to the same id. Once again, the order of the CSV values is meaningful for the subsequent manipulations. For example: 0.1, 0.2, 0.3 (the first values from kpi1, kpi2, kpi3) form a meaningful set of values, which is why I want them to be in the first row. The next row contains the second values from each of the kpi's: 0.11, 0.22, 0.33.

    Since I do have an id in my source table and the function returns this id, I thought I could do a JOIN instead of CROSS APPLY, but this gives me a syntax error:

    select t.id, t.kpi1, t.kpi2, t.kpi3, ca1.items as items1

    from

    #t as t

    left join

    [dbo].[fnETL_Split_Set_CSV_To_Table] (t.id, t.kpi1, ',') as ca1

    on

    t.id = ca1.id

    Apparently this is invalid, because I get these errors back:

    Msg 4104, Level 16, State 1, Line 5

    The multi-part identifier "t.id" could not be bound.

    Msg 4104, Level 16, State 1, Line 5

    The multi-part identifier "t.kpi1" could not be bound.

    I appreciate any pointers in the right direction.

    Thank you!

  • Here's how I'd do it:

    Step 1: UNPIVOT the data to get a more normalized data structure.

    Step 2: Apply the string split function (side note: you might want to either search this site for "DelimitedSplit8K" or have a look at the TallyTable article referenced in my signature to get a better performing split function than you currently have...)

    Step 3: Get the final result using the CrossTab approach (see the related link in my signature for details).

    Edit: Thank you for providing ready to use sample data! Made it really easy to work on! Excellent job!!!!

    ;

    WITH cte AS

    (

    SELECT id,col,val

    FROM

    (SELECT id, kpi1,kpi2,kpi3

    FROM #t) p

    UNPIVOT

    (val FOR col IN

    (kpi1,kpi2,kpi3)

    )AS unpvt

    ), cte2 AS

    (

    SELECT id,col,val, item,itemnumber

    FROM cte

    CROSS APPLY ( SELECT *

    FROM dbo.DelimitedSplit8K(val, ',')) z

    )

    SELECT

    id,

    MAX(CASE WHEN col='kpi1' THEN val ELSE NULL END) AS kpi1,

    MAX(CASE WHEN col='kpi2' THEN val ELSE NULL END) AS kpi2,

    MAX(CASE WHEN col='kpi3' THEN val ELSE NULL END) AS kpi3,

    MAX(CASE WHEN col='kpi1' THEN item ELSE NULL END) AS items1 ,

    MAX(CASE WHEN col='kpi2' THEN item ELSE NULL END) AS items2 ,

    MAX(CASE WHEN col='kpi3' THEN item ELSE NULL END) AS items3

    FROM cte2

    GROUP BY id,itemnumber

    ORDER BY id,itemnumber

    /* result set

    id  kpi1            kpi2            kpi3            items1  items2  items3
    1   0.1,0.11,0.111  0.2,0.22,0.222  0.3,0.33,0.333  0.1     0.2     0.3
    1   0.1,0.11,0.111  0.2,0.22,0.222  0.3,0.33,0.333  0.11    0.22    0.33
    1   0.1,0.11,0.111  0.2,0.22,0.222  0.3,0.33,0.333  0.111   0.222   0.333
    2   0.4,0.44,0.444  0.5,0.55,0.555  0.6,0.66,0.666  0.4     0.5     0.6
    2   0.4,0.44,0.444  0.5,0.55,0.555  0.6,0.66,0.666  0.44    0.55    0.66
    2   0.4,0.44,0.444  0.5,0.55,0.555  0.6,0.66,0.666  0.444   0.555   0.666

    */



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • Thank you very much, lmu92! Great solution and exactly what I need.

    As a side note, I did a search on this site for "DelimitedSplit8K", but I can't seem to find it. However, I do understand that the one I listed above is inefficient. Below is the alternative; is it similar to yours?

    CREATE FUNCTION [dbo].[fnSplit]

    (@list VARCHAR(8000),

    @delim CHAR(1) = ','

    ) RETURNS TABLE AS

    RETURN

    WITH csvtbl(START, stop) AS (

    SELECT START = 1,

    stop = CHARINDEX(@delim, @list + @delim)

    UNION ALL

    SELECT START = stop + 1,

    stop = CHARINDEX(@delim,

    @list + @delim, stop + 1)

    FROM csvtbl

    WHERE stop > 0

    )

    SELECT row_number() over (order by Start) as ItemID,

    LTRIM(RTRIM(SUBSTRING(@list, START, CASE WHEN stop > 0 THEN stop - START ELSE 0 END)))

    AS ItemValue

    FROM csvtbl

    WHERE stop > 0

    GO
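
    For what it's worth, here is a quick sketch of how I would call it against the #t sample table from my first post (my own illustration, assuming the function compiles as above; since it is built on a recursive CTE, the outer query would also need an OPTION (MAXRECURSION ...) hint for strings with more than about 100 items):

    -- usage sketch: split kpi1 for every row of the sample table
    select t.id, s.ItemID, s.ItemValue
    from #t as t
    cross apply dbo.fnSplit(t.kpi1, ',') as s;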

  • mishaluba (6/27/2010)


    Thank you very much, lmu92! Great solution and exactly what I need.

    As a side note, I did a search on this site for "DelimitedSplit8K", but I can't seem to find it. However, I do understand that the one I listed above is inefficient. Below is the alternative; is it similar to yours?

    CREATE FUNCTION [dbo].[fnSplit]

    (@list VARCHAR(8000),

    @delim CHAR(1) = ','

    ) RETURNS TABLE AS

    RETURN

    WITH csvtbl(START, stop) AS (

    SELECT START = 1,

    stop = CHARINDEX(@delim, @list + @delim)

    UNION ALL

    SELECT START = stop + 1,

    stop = CHARINDEX(@delim,

    @list + @delim, stop + 1)

    FROM csvtbl

    WHERE stop > 0

    )

    SELECT row_number() over (order by Start) as ItemID,

    LTRIM(RTRIM(SUBSTRING(@list, START, CASE WHEN stop > 0 THEN stop - START ELSE 0 END)))

    AS ItemValue

    FROM csvtbl

    WHERE stop > 0

    GO

    That's a recursive CTE and can be as bad as a While Loop for CPU and will usually be about 3 times worse on the number of reads. It's just another form of RBAR and it should usually be avoided.

    Yes... you are correct... the search capabilities on this forum suck. I'm sure that Lutz will be back soon with his split function.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (6/27/2010)


    ...

    Yes... you are correct... the search capabilities on this forum suck. I'm sure that Lutz will be back soon with his split function.

    That's something I cannot confirm (at least not in this case...)

    Search string: DelimitedSplit8K returned the following thread on the first page:

    Thread name: SP error

    ... FROM #List l CROSS APPLY DelimitedSplit8K(l.List, @Delimiter) f ... See how the DelimitedSplit8K function returns a table that can be joined ... From HC_USERS CROSS JOIN DelimitedSplit8K(@rid, ',') ... Finally, clean up ...

    This thread includes a post by WayneS showing the DelimitedSplit8K function I'm using at the moment.

    But I admit, it's always easier to find something once you know where to look and what to look for...

    @mishaluba: As a side note: please check the author of that script and see if the name rings a bell 😉

    @jeff: I'm sure you know that I don't have my own split function. Right? 😀



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • Once again, thank you both, Lutz and Jeff, for your help! There is one more happy developer in the world; he learned something new today 😀

  • Glad we could help 😀

    But you did the most important part by providing ready-to-use sample data, showing what you've tried so far, and clearly explaining what your expected output needs to look like.

    Posts like that are the ones I enjoy working on, since they not only show that you put some effort into your question, but also imply that you'll use the answer to learn from it and not just copy, paste, and move on.



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • lmu92 (6/28/2010)


    This thread includes a post by WayneS showing the DelimitedSplit8K function I'm using at the moment.

    I believe I saw a follow-up post from Jeff where he found that the UNPIVOT (as demonstrated in the above link) turns out to be slower when he ran it on his work prod server several times. Jeff, please let us know if I remember this correctly.

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • WayneS (6/28/2010)


    lmu92 (6/28/2010)


    This thread includes a post by WayneS showing the DelimitedSplit8K function I'm using at the moment.

    I believe I saw a follow-up post from Jeff where he found that the UNPIVOT (as demonstrated in the above link) turns out to be slower when he ran it on his work prod server several times. Jeff, please let us know if I remember this correctly.

    You did, indeed. I didn't take the time to analyze "why", but on certain machines with multiple processors, the UNPIVOT method sometimes runs substantially slower. I also didn't realize that the function you good folks were talking about was the function that I posted. Here's the latest and greatest, with all the documentation and optimizations that I currently use for production code... the documentation in the header is quite substantial. {EDIT} Updated the code below to include the latest performance improvement (from yesterday), thanks to Paul White.

    CREATE FUNCTION dbo.DelimitedSplit8K

    /***************************************************************************************************

    Purpose:

    Split a given string at a given delimiter and return a list of the split elements (items).

    Returns:

    iTVF containing the following:

    ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)

    Item = Element value as a VARCHAR(8000)

    CROSS APPLY Usage Example:

    ---------------------------------------------------------------------------------------------------

    --===== Conditionally drop the test tables to make reruns easier for testing.

    -- (this is NOT a part of the solution)

    IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL

    DROP TABLE #JBMTest

    ;

    --===== Create and populate a test table on the fly (this is NOT a part of the solution).

    SELECT *

    INTO #JBMTest

    FROM (

    SELECT 1,'1,10,100,1000,10000,100000,1000000' UNION ALL

    SELECT 2,'2000000,200000,20000,2000,200,20,2' UNION ALL

    SELECT 3, 'This,is,a,test' UNION ALL

    SELECT 4, 'and so is this' UNION ALL

    SELECT 5, 'This, too (no pun intended)'

    ) d (SomeID,SomeValue)

    ;

    GO

    --===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)

    SELECT test.SomeID, split.ItemNumber, split.Item

    FROM #JBMTest test

    CROSS APPLY

    (

    SELECT ItemNumber, Item

    FROM dbo.DelimitedSplit8k(test.SomeValue,',')

    ) split

    ;

    ---------------------------------------------------------------------------------------------------

    Notes:

    1. Optimized for VARCHAR(7999) or less. No testing or error reporting for truncation at 7999

    characters is done.

    2. Optimized for single character delimiter. Multi-character delimiters should be resolved

    externally from this function.

    3. Optimized for use with CROSS APPLY.

    4. Does not "trim" elements just in case leading or trailing blanks are intended.

    5. If you don't know how a Tally table can be used to replace loops, please see the following...

    http://www.sqlservercentral.com/articles/T-SQL/62867/

    6. Changing this function to use VARCHAR(MAX) will cause it to run twice as slow. It's just the

    nature of VARCHAR(MAX) whether it fits in-row or not.

    7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows

    that the UNPIVOT method is quite machine dependent and can slow things down quite a bit.

    8. Performance testing shows using "TOP" for the limiting criteria of "N" is actually

    slower and slightly more CPU intensive than the traditional WHERE N < LEN(@pString) + 2.

    9. Performance testing shows using ORDER BY (SELECT x) where "x" is anything is actually

    slower and slightly more CPU intensive than the traditional ORDER BY (SELECT N).

    Credits:

    This code is the product of many people's efforts including but not limited to the following:

    cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a

    bit of extra speed and finally redacted by Jeff Moden for a different slant on readability and

    compactness. Hats off to Paul White for his simple explanations of CROSS APPLY. Finally,

    special thanks to Erland Sommarskog for his tireless efforts to help people understand

    what you can actually do with T-SQL. I also thank whoever wrote the first article I ever saw

    on "numbers tables" which is located at the following URL ...

    http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html

    Revision History:

    Rev 00 - 20 Jan 2010 - Concept: Lynn Pettis and others.

    Redaction/Implementation: Jeff Moden

    - Base 10 redaction and reduction for CTE. (Total rewrite)

    Rev 01 - 13 Mar 2010 - Jeff Moden

    - Removed one additional concatenation and one subtraction from the SUBSTRING in the

    SELECT List for that tiny bit of extra speed.

    Rev 02 - 14 Apr 2010 - Jeff Moden

    - No code changes. Added CROSS APPLY usage example to the header, some additional credits,

    and extra documentation.

    Rev 03 - 18 Apr 2010 - Jeff Moden

    - No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't

    actually work for this type of function.

    Rev 04 - 29 Jun 2010 - Jeff Moden

    - Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary

    "Table Spool" when the function is used in an UPDATE statement even though the function

    makes no external references.

    ***************************************************************************************************/

    --===== Define I/O parameters

    (

    @pString VARCHAR(7999),

    @pDelimiter CHAR(1)

    )

    RETURNS TABLE

    WITH SCHEMABINDING

    AS

    RETURN

    --===== "Inline" CTE Driven "Tally Table” produces values up to

    -- 10,000... enough to cover VARCHAR(8000)

    WITH

    E1(N) AS ( --=== Create Ten 1's

    SELECT 1 UNION ALL SELECT 1 UNION ALL

    SELECT 1 UNION ALL SELECT 1 UNION ALL

    SELECT 1 UNION ALL SELECT 1 UNION ALL

    SELECT 1 UNION ALL SELECT 1 UNION ALL

    SELECT 1 UNION ALL SELECT 1 --10

    ),

    E2(N) AS (SELECT 1 FROM E1 a, E1 b), --100

    E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10,000

    cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM E4)

    --===== Do the split

    SELECT ROW_NUMBER() OVER (ORDER BY N) AS ItemNumber,

    SUBSTRING(@pString, N, CHARINDEX(@pDelimiter, @pString + @pDelimiter, N) - N) AS Item

    FROM cteTally

    WHERE N < LEN(@pString) + 2

    AND SUBSTRING(@pDelimiter + @pString, N, 1) = @pDelimiter

    ;

    GO

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Sigh... I sure wish we could put an individual forum post into the briefcase... This one of Jeff's certainly would be in it.

    @jeff -

    1. Did you test after adding the WITH SCHEMABINDING? If not, I see a test in my near future. 😉

    2. I see you changed the input parameter from varchar(8000) to varchar(7999), but I don't see a reason for that in your notes. Would you elaborate on this please?

    Edit:

    3. Is it safe to assume that a properly configured (Clustered index, 100% FillFactor) real tally table would perform better than the inline one (the comment Scott made below brings this up)?

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • So, the inline CTE has better performance than a tally table? Interesting. Sometimes SQL's performance traits are a bit quirky.

    Scott Pletcher, SQL Server MVP 2008-2010

  • WayneS (6/29/2010)


    2. I see you changed the input parameter from varchar(8000) to varchar(7999), but I don't see a reason for that in your notes. Would you elaborate on this please?

    I think I can answer that one ...

    He does a @pString + @pDelimiter to search for the @pDelimiter (to guarantee it will be found) and that total searched expression is limited to 8000 characters.
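
    A tiny sketch to illustrate the point (my own example, not from Jeff's code): with a full 8000-character input, the non-MAX concatenation is silently capped at 8000 characters, so the appended delimiter would simply be lost:

    DECLARE @s VARCHAR(8000);
    SET @s = REPLICATE('x', 8000);
    -- concatenating two non-MAX strings is capped at 8000 characters,
    -- so the comma appended here is silently truncated away
    SELECT LEN(@s + ',') AS concatenated_length; -- returns 8000, not 8001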

    I read the other thread where Paul was discussing his findings. I was talking to someone at work about it and would like to find it again if anyone happens to know the link.

    *edit: Never mind about the link, found it here.

    └> bt



    Forum Etiquette: How to post data/code on a forum to get the best help

  • WayneS (6/29/2010)


    Sigh... I sure wish we could put an individual forum post into the briefcase... This one of Jeff's certainly would be in it.

    @jeff -

    1. Did you test after adding the WITH SCHEMABINDING? If not, I see a test in my near future. 😉

    2. I see you changed the input parameter from varchar(8000) to varchar(7999), but I don't see a reason for that in your notes. Would you elaborate on this please?

    Edit:

    3. Is it safe to assume that a properly configured (Clustered index, 100% FillFactor) real tally table would perform better than the inline one (the comment Scott made below brings this up)?

    1. I didn't test this one specifically for the WITH SCHEMABINDING because every other test I ran with schemabinding showed either no change or a minor improvement and I'm also being a bit lazy... I wanted to post the function and didn't have the time to do the normal million row testing on this specific function. I did so without much fear because the tests on other functions show that WITH SCHEMABINDING is one of those things you can pretty much take to the bank. You know me... I'll get to retesting it soon, though.

    2. I changed the input parameter to 7999 as a clue to folks that you have to save space to include at least 1 extra delimiter in the code (@pDelimiter + @pString and @pString + @pDelimiter). I guess I should spell out the reason in note 1.

    3. Again, I've not specifically tested for this specific function. In all other functions that I've tested, both the Tally Table and the Tally CTE perform equally well, although the Tally CTE does it virtually without reads. Again, I'll be doing more testing, especially since I made the WITH SCHEMABINDING change.
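
    For anyone who wants to run that comparison themselves, here's a minimal sketch (names and row count are just for illustration, not the exact version from the article) of the kind of "real" Tally table Wayne is describing, with a clustered primary key and a 100% fill factor:

    --===== Build a small permanent Tally table (illustrative sketch only)
     SELECT TOP 11000
            IDENTITY(INT,1,1) AS N
       INTO dbo.Tally
       FROM master.sys.all_columns ac1
      CROSS JOIN master.sys.all_columns ac2
    ;
    --===== Add the clustered PK at 100% fill factor
      ALTER TABLE dbo.Tally
        ADD CONSTRAINT PK_Tally_N PRIMARY KEY CLUSTERED (N) WITH FILLFACTOR = 100
    ;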

    As far as the briefcase goes... once I get done retesting the function, I aim to include it in an update to the Tally Table article.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • scott.pletcher (6/29/2010)


    So, the inline CTE has better performance than a tally table? Interesting. Sometimes SQL's performance traits are a bit quirky.

    In other functions, I've not found that to be true. The Tally Table normally squeaks past the CTE for duration, but the CTE generates virtually no reads. I guess it depends on how you want to impact the system. When I get around to the Tally Table article rewrite, I'll include million-row testing for both methods.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff,

    I would like to ask your opinion on the version of the split I recently wrote. It is implemented as a stored proc and does not have any limitation on the size of the string to split. Performance tests that I ran show that feeding text consisting of 10,000 sentences (dot delimited) to it (about 500,000 characters in total, or 1 MB in datalength) takes about 0.3 seconds to execute, and expanding the size 10-fold to 100,000 sentences (5 million characters in total, or 10 MB in datalength) increases the execution time also about 10-fold, to a total of 3 seconds. This is on a 2-year-old desktop with Windows XP SP3, 2 GB of RAM, and a dual-core CPU. The idea is to utilize the engine's ability to process xml:

    create proc dbo.usp_DelimitedSplit

    (

    @text nvarchar(max),

    @delimiter char(1),

    @entitize bit = 1

    )

    as

    begin

    declare @xml xml;

    if @entitize = 1 set @text = (select @text for xml path(''));

    set @xml = '<r>' + replace(@text, @delimiter, '</r><r>') + '</r>';

    select

    row_number() over (order by (select null)) item_number,

    item.value('text()[1]', 'varchar(max)') item_value

    from @xml.nodes('//r') R(item);

    end;

    go

    As you can see, the procedure is very simple. The optional @entitize parameter deserves an explanation, though. It is there to ensure that if the text contains some "xml unfriendly" characters, these are entitized by default. However, it leaves the calling code the option to pass 0 as the @entitize value, should the caller be aware that the text has already undergone entitizing treatment prior to the procedure execution. For example,

    declare @delimiter char(1);

    declare @text nvarchar(max);

    select @delimiter = '.', @text = 'a<x>&z.b.c';

    exec dbo.usp_DelimitedSplit @text, @delimiter;

    automatically handles unfriendly characters, returning:

    item_number item_value

    -------------------------------

    1 a<x>&z

    2 b

    3 c

    Here is the test I used to start from scratch, pad the string with 10,000 sentences, and then call the procedure to split it. The number 10000 is hard-coded, but can be changed to any arbitrary number:

    declare @delimiter char(1);

    declare @text nvarchar(max);

    set @delimiter = '.';

    -- populate @text like this

    -- "This is a<&>/><x>& part of the text to split into rows_000001."

    -- "This is a<&>/><x>& part of the text to split into rows_000002." etc

    set

    @text = stuff(cast(

    (

    select top 10000

    @delimiter + 'This is a<&>/><x>& part of the text to split into rows_' +

    replace(str(row_number() over(order by a.[object_id]), 6), ' ', '0')

    from sys.objects a cross join sys.objects b

    for xml path('')

    ) as varchar(max)), 1, 1, '');

    -- because the @text happens to be already entitized,

    -- pass 0 as @entitize parameter value

    exec dbo.usp_DelimitedSplit @text, @delimiter, 0;

    The above returns:

    item_number item_value

    ----------------------------------------------------------------------------------

    1 This is a<&>/><x>& part of the text to split into rows_000001

    2 This is a<&>/><x>& part of the text to split into rows_000002

    /* ... abridged ...*/

    10000 This is a<&>/><x>& part of the text to split into rows_010000
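
    One more note: because this is implemented as a stored procedure rather than a table-valued function, the result set has to be captured with insert ... exec before it can be joined to other tables. A minimal sketch (the temp table name is just for illustration):

    -- capture the procedure's output so it can participate in set-based queries
    create table #split (item_number bigint, item_value varchar(max));

    insert into #split (item_number, item_value)
    exec dbo.usp_DelimitedSplit @text, @delimiter, 0;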

    Thanks,

    Oleg
