Comma-separated values shown as a table

  • I have one table, tbaleOne:

    Column1
    -------------------------
    AA,BB,CC
    DD,EE,FF
    GG,HH,II

    I need this output:

    col1  col2  col3
    ----------------
    AA    BB    CC
    DD    EE    FF
    GG    HH    II

  • Is the number of columns fixed/known, e.g. always 3 as in the example above?

  • The easiest way is probably to use the CSV Splitter function, by Jeff Moden (with some assistance from others) - http://www.sqlservercentral.com/articles/Tally+Table/72993/.
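    For reference, a sketch of how that splitter is typically called. This assumes the function has been created as dbo.DelimitedSplit8K, returning one row per element with columns ItemNumber and Item, as described in the linked article; the exact name and signature depend on the version you install.

    ```sql
    -- Sketch only: assumes dbo.DelimitedSplit8K from the linked article exists
    -- and returns (ItemNumber, Item) with one row per delimited element.
    SELECT t.Column1, s.ItemNumber, s.Item
    FROM tbaleOne t
    CROSS APPLY dbo.DelimitedSplit8K(t.Column1, ',') s;
    -- Note: this yields 3 rows per input row, not 3 columns,
    -- so the rows still need to be pivoted back into columns.
    ```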

  • This will do what the OP requested, but it is highly unlikely to match the real requirement; poor specification = poor solution.

    select SUBSTRING(Column1,1,2) as col1
          ,SUBSTRING(Column1,4,2) as col2
          ,SUBSTRING(Column1,7,2) as col3
    from tbaleOne

    I expect the splitter solution as suggested by BrainDonor will turn out to be the desired solution.

  • BrainDonor (8/16/2012)


    The easiest way is probably to use the CSV Splitter function, by Jeff Moden (with some assistance from others) - http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    The splitter function will return 3 rows per input row, not 3 columns as requested. If you apply the splitter function, you will still have to unpivot the resulting rows into columns. And this is where Suresh B.'s question comes in: how many values are there in the input? If there are always 3 values, it may be easier to do this with charindex() and substring(). If the number of values can vary, have a look at the cross tab link in my signature: using a cross tab you can turn the rows back into columns. Even when there are always exactly 3 values, it is still worth reading.

    Here's a little example of how you can turn the rows back into columns using a cross tab:

    declare @tbaleOne table (
        tbale_id int identity(1,1) not null,
        Column1 varchar(100) not null,
        primary key(tbale_id)
    );
    insert @tbaleOne(Column1)
    select 'AA,BB,CC'
    union all select 'DD,EE,FF'
    union all select 'GG,HH,II';

    set ansi_warnings off;

    select max(case x.pos when 1 then x.val end) as col1,
           max(case x.pos when 4 then x.val end) as col2,
           max(case x.pos when 7 then x.val end) as col3
    from @tbaleOne i
    cross apply (
        select 1 as pos, substring(i.Column1, 1, 2) as val
        union all
        select 4 as pos, substring(i.Column1, 4, 2) as val
        union all
        select 7 as pos, substring(i.Column1, 7, 2) as val
    ) x
    group by i.tbale_id;

    set ansi_warnings on;



    Posting Data Etiquette - Jeff Moden
    Posting Performance Based Questions - Gail Shaw
    Hidden RBAR - Jeff Moden
    Cross Tabs and Pivots - Jeff Moden
    Catch-all queries - Gail Shaw


    If you don't have time to do it right, when will you have time to do it over?

  • In one particular simple case, when: 1) there are no more than 4 columns, 2) the column data fits in nvarchar(128) (sysname), and 3) there are no dots in the data, you can use the parsename function like this:

    declare @t table(c varchar(100))
    insert @t values ('AA,BB,CC'),('DD,EE,FF'),('GG,HH,II')

    select
        col1 = parsename(replace(c,',','.'),3),
        col2 = parsename(replace(c,',','.'),2),
        col3 = parsename(replace(c,',','.'),1)
    from @t

    But for the general case, you would be better off using the CSV splitter function mentioned above.


    I am really sorry for my poor grammar. And I hope that the value of my answers will outweigh the harm to your eyes.
    Blog: http://somewheresomehow.ru
    Twitter: @SomewereSomehow

  • vishnu 9013 (8/16/2012)


    I have one table, tbaleOne:

    Column1
    -------------------------
    AA,BB,CC
    DD,EE,FF
    GG,HH,II

    I need this output:

    col1  col2  col3
    ----------------
    AA    BB    CC
    DD    EE    FF
    GG    HH    II

    Where do you need this output?

    _____________
    Code for TallyGenerator

  • Here's another way that is a bit faster than using PARSENAME.

    SET NOCOUNT ON;

    --== SOME SAMPLE DATA ==--
    IF OBJECT_ID('tempdb..#testEnvironment') IS NOT NULL
    BEGIN
        DROP TABLE #testEnvironment;
    END;

    CREATE TABLE #testEnvironment (c VARCHAR(100));

    --1,000,000 random rows of data
    WITH Tally (n) AS (
        SELECT TOP 1000000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_columns t1 CROSS JOIN sys.all_columns t2)
    INSERT INTO #testEnvironment
    SELECT CAST(ABS(CHECKSUM(NEWID()) % 15) AS VARCHAR(50)) + ',' +
           CAST(ABS(CHECKSUM(NEWID()) % 15) AS VARCHAR(50)) + ',' +
           CAST(ABS(CHECKSUM(NEWID()) % 15) AS VARCHAR(50))
    FROM Tally;

    --Holder variables to take display time out of the equation
    DECLARE @HOLDER1 VARCHAR(100), @HOLDER2 VARCHAR(100), @HOLDER3 VARCHAR(100);

    PRINT REPLICATE('=',80);
    PRINT 'PARSENAME';
    PRINT REPLICATE('=',80);
    SET STATISTICS IO, TIME ON;
    select
        @HOLDER1 = parsename(replace(c,',','.'),3),
        @HOLDER2 = parsename(replace(c,',','.'),2),
        @HOLDER3 = parsename(replace(c,',','.'),1)
    from #testEnvironment;
    SET STATISTICS IO, TIME OFF;

    PRINT REPLICATE('=',80);
    PRINT 'SUBSTRING';
    PRINT REPLICATE('=',80);
    SET STATISTICS IO, TIME ON;
    SELECT @HOLDER1 = SUBSTRING(c, 1, n1 - 1)
          ,@HOLDER2 = SUBSTRING(c, n1 + 1, LEN(c) - (n2 + 1))
          ,@HOLDER3 = SUBSTRING(c, n2 + 2, LEN(c))
    FROM #testEnvironment
    CROSS APPLY (SELECT CHARINDEX(',', c), LEN(c) - CHARINDEX(',', REVERSE(c))) a(n1, n2);
    SET STATISTICS IO, TIME OFF;

    DROP TABLE #testEnvironment;

    Timing results:

    ================================================================================
    PARSENAME
    ================================================================================
    Table '#testEnvironment____________________________________________________________________________________________________0000000000AD'. Scan count 1, logical reads 2351, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 2481 ms, elapsed time = 2467 ms.
    ================================================================================
    SUBSTRING
    ================================================================================
    Table '#testEnvironment____________________________________________________________________________________________________0000000000AD'. Scan count 1, logical reads 2351, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 1653 ms, elapsed time = 1673 ms.


    My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

    My thought question: Have you ever been told that your query runs too fast?

    My advice:
    INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
    The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.

    Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
    Since random numbers are too important to be left to chance, let's generate some!
    Learn to understand recursive CTEs by example.

  • Hi. Case one: if the comma always comes exactly after two characters:

    declare @t table(c varchar(100))
    insert @t values ('AA,BB,CC'),('DD,EE,FF'),('GG,HH,II')

    select
        col1 = SUBSTRING(c,1,2),
        col2 = SUBSTRING(c,4,2),
        col3 = SUBSTRING(c,7,2)
    from @t

    Case two: if there is no rule about where the comma will be, then write three split functions:

    the first returns the first part of the string passed in, e.g. for 'aaa,bb,cc' it returns 'aaa';
    the second returns the second part, e.g. for 'aaa,bb,cc' it returns 'bb';
    the third returns the third part, e.g. for 'aaa,bb,cc' it returns 'cc'.

    ---
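    The three split functions described above can also be folded into a single inline sketch built on charindex and substring. The function name and layout below are illustrative only (not from the thread), and it assumes exactly two commas per string:

    ```sql
    -- Hypothetical inline table-valued function returning the three parts of a
    -- comma-separated string. Name and shape are illustrative, not from the thread.
    -- Assumes the input contains exactly two commas.
    CREATE FUNCTION dbo.SplitThree (@s VARCHAR(100))
    RETURNS TABLE
    AS
    RETURN
        SELECT part1 = SUBSTRING(@s, 1, p1.n - 1),
               part2 = SUBSTRING(@s, p1.n + 1, p2.n - p1.n - 1),
               part3 = SUBSTRING(@s, p2.n + 1, LEN(@s))
        FROM (SELECT CHARINDEX(',', @s)) p1(n)                       -- first comma
        CROSS APPLY (SELECT CHARINDEX(',', @s, p1.n + 1)) p2(n);     -- second comma
    ```

    It could then be applied per row with CROSS APPLY, e.g. `SELECT f.part1, f.part2, f.part3 FROM @t t CROSS APPLY dbo.SplitThree(t.c) f;`, which avoids calling three separate scalar functions per row.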

  • dwain.c (8/16/2012)


    Here's another way that is a bit faster than using PARSENAME.

    Speaking of the performance aspect, I think it is not parsename itself but rather the impact of replace, because of its design.

    I reran your test, specifying the collation explicitly like this:

    select
        @HOLDER1 = parsename(replace(c COLLATE LATIN1_GENERAL_BIN,',','.'),3),
        @HOLDER2 = parsename(replace(c COLLATE LATIN1_GENERAL_BIN,',','.'),2),
        @HOLDER3 = parsename(replace(c COLLATE LATIN1_GENERAL_BIN,',','.'),1)
    from #testEnvironment

    and got those numbers

    ================================================================================

    PARSENAME

    ================================================================================

    Table '#testEnvironment____________________________________________________________________________________________________000000000043'. Scan count 1, logical reads 2351, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 1875 ms, elapsed time = 1870 ms.

    ================================================================================

    SUBSTRING

    ================================================================================

    Table '#testEnvironment____________________________________________________________________________________________________000000000043'. Scan count 1, logical reads 2351, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 2203 ms, elapsed time = 2199 ms.

    I also noticed an interesting thing: if you specify the collation explicitly in the substring example, it has no effect. Parsename wins.

    But, if you create a table like:

    CREATE TABLE #testEnvironment (c VARCHAR(100) COLLATE LATIN1_GENERAL_BIN)

    then substring wins again, with these numbers in my tests: SUBSTRING 1334 ms vs PARSENAME 1676 ms.

    So, talking about performance, I think we may conclude that, as usual, it depends =)



  • SomewhereSomehow (8/17/2012)


    dwain.c (8/16/2012)


    Here's another way that is a bit faster than using PARSENAME.

    Speaking of the performance aspect, I think it is not parsename itself but rather the impact of replace, because of its design.

    I reran your test, specifying the collation explicitly like this:

    select

    @HOLDER1 = parsename(replace(c COLLATE LATIN1_GENERAL_BIN,',','.'),3),

    @HOLDER2 = parsename(replace(c COLLATE LATIN1_GENERAL_BIN,',','.'),2),

    @HOLDER3 = parsename(replace(c COLLATE LATIN1_GENERAL_BIN,',','.'),1)

    from

    #testEnvironment

    and got those numbers

    ================================================================================

    PARSENAME

    ================================================================================

    Table '#testEnvironment____________________________________________________________________________________________________000000000043'. Scan count 1, logical reads 2351, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 1875 ms, elapsed time = 1870 ms.

    ================================================================================

    SUBSTRING

    ================================================================================

    Table '#testEnvironment____________________________________________________________________________________________________000000000043'. Scan count 1, logical reads 2351, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 2203 ms, elapsed time = 2199 ms.

    I also noticed an interesting thing: if you specify the collation explicitly in the substring example, it has no effect. Parsename wins.

    But, if you create a table like:

    CREATE TABLE #testEnvironment (c VARCHAR(100) COLLATE LATIN1_GENERAL_BIN)

    then substring wins again, with these numbers in my tests: SUBSTRING 1334 ms vs PARSENAME 1676 ms.

    So, talking about performance, I think we may conclude that, as usual, it depends =)

    Yes, interesting results indeed. I remembered that I forgot to account for the collation bug in REPLACE but didn't have time to go back and rerun that way. I'm glad somebody did. 🙂



  • Just to clarify... there is no bug with REPLACE. It's a bug with just about anything that uses the "wrong" COLLATION. Even simple JOINs can be horribly slow if the COLLATION is set to one of the "slower" COLLATIONs.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (8/17/2012)


    Just to clarify... there is no bug with REPLACE. It's a bug with just about anything that uses the "wrong" COLLATION. Even simple JOINs can be horribly slow if the COLLATION is set to one of the "slower" COLLATIONs.

    Jeff - You are being enigmatic again.

    Could you expound on this? I've only heard of the issue relating to REPLACE. Can you describe or link to some other examples of this?

    After hearing about this, I tried using different collating sequences on some additional operations but I never found an impact on anything but REPLACE.



  • dwain.c (8/18/2012)


    Jeff Moden (8/17/2012)


    Just to clarify... there is no bug with REPLACE. It's a bug with just about anything that uses the "wrong" COLLATION. Even simple JOINs can be horribly slow if the COLLATION is set to one of the "slower" COLLATIONs.

    Jeff - You are being enigmatic again.

    Heh... gosh, no. No enigma here. What I said can be taken pretty much as I said it. That, notwithstanding, the following does show why you might think it an enigma. 😀

    Could you expound on this? I've only heard of the issue relating to REPLACE. Can you describe or link to some other examples of this?

    After hearing about this, I tried using different collating sequences on some additional operations but I never found an impact on anything but REPLACE.

    I guess a demonstration is in order. More details are available in the code but, to summarize, there is a "language tax" in the form of processing time for most collations other than "binary" ones. Of course, I don't recommend binary collations as a default unless you really, really like case and accent sensitive databases. I personally don't care for them. Please read the warning about TempDB in the comments before you decide to change a default collation on an existing instance or table.

    /*****************************************************************************************
     Proof that the "REPLACE BUG" actually has nothing to do with REPLACE. Rather, it's just a
     "simple" "cost of doing business" based on the collation of the data. There seems to be
     quite a processing "tax" on some of the non-Latin-based collations.

     PLEASE READ ABOUT COLLATE AND HOW MAKING CHANGES TO THE DEFAULT COLLATION CAN CRUSH
     PERFORMANCE WHEN TEMPDB IS INVOLVED.

     Of course, most binary collations blow the doors off of most other collations.

     --Jeff Moden - 19 Aug 2012
    *****************************************************************************************/
    --========================================================================================
    --      Presets
    --========================================================================================
    --===== Suppress the auto-display of rowcounts to clean up the display results a bit.
    SET NOCOUNT ON;

    --===== Conditionally drop the test tables to make reruns easier in SSMS.
    IF OBJECT_ID('tempdb..#Test1','U') IS NOT NULL DROP TABLE #Test1;
    IF OBJECT_ID('tempdb..#Test2','U') IS NOT NULL DROP TABLE #Test2;
    IF OBJECT_ID('tempdb..#Test3','U') IS NOT NULL DROP TABLE #Test3;
    IF OBJECT_ID('tempdb..#Test4','U') IS NOT NULL DROP TABLE #Test4;
    GO
    --========================================================================================
    --      Create and populate 4 nearly identical tables with identical values.
    --      The only differences between the tables are their names and the collation used.
    --      The #Test1 table uses a "slow" collation.
    --      The #Test2 table uses a "fast" collation that just happens to be the default for
    --      SQL Server in the United States of America.
    --      The #Test3 table uses nearly the same collation as #Test2 except that it is
    --      "accent insensitive", as well.
    --      The #Test4 table uses one of the fastest collations IF you can tolerate case
    --      and accent sensitivity for the query.
    --========================================================================================
    SELECT TOP 100000
           SomeString = REPLICATE(CAST(NEWID() AS VARCHAR(500)),10) COLLATE Czech_CI_AI
      INTO #Test1
      FROM sys.all_columns ac1,
           sys.all_columns ac2
    ;
    GO
    SELECT SomeString = SomeString COLLATE SQL_Latin1_General_CP1_CI_AS
      INTO #Test2
      FROM #Test1
    ;
    GO
    SELECT SomeString = SomeString COLLATE SQL_LATIN1_GENERAL_CP1_CI_AI
      INTO #Test3
      FROM #Test1
    ;
    GO
    SELECT SomeString = SomeString COLLATE LATIN1_GENERAL_BIN
      INTO #Test4
      FROM #Test1
    ;
    GO
    --=========================================================================================
    --      Performance tests. Note that the only things different in any of these tests
    --      are the collation of the column in the table and the collation of the string
    --      literal, which has been changed to match the collation of the column being searched.
    --=========================================================================================
    PRINT '========== COLLATION = Czech_CI_AI ==========';
    SET STATISTICS TIME ON;
    SELECT COUNT(*)
      FROM #Test1
     WHERE SomeString LIKE '%AB%' COLLATE Czech_CI_AI;
    SET STATISTICS TIME OFF;
    GO
    PRINT '========== COLLATION = SQL_Latin1_General_CP1_CI_AS ==========';
    SET STATISTICS TIME ON;
    SELECT COUNT(*)
      FROM #Test2
     WHERE SomeString LIKE '%AB%' COLLATE SQL_Latin1_General_CP1_CI_AS;
    SET STATISTICS TIME OFF;
    GO
    PRINT '========== COLLATION = SQL_LATIN1_GENERAL_CP1_CI_AI ==========';
    SET STATISTICS TIME ON;
    SELECT COUNT(*)
      FROM #Test3
     WHERE SomeString LIKE '%AB%' COLLATE SQL_LATIN1_GENERAL_CP1_CI_AI;
    SET STATISTICS TIME OFF;
    GO
    PRINT '========== COLLATION = LATIN1_GENERAL_BIN ==========';
    SET STATISTICS TIME ON;
    SELECT COUNT(*)
      FROM #Test4
     WHERE SomeString LIKE '%AB%' COLLATE LATIN1_GENERAL_BIN;
    SET STATISTICS TIME OFF;
    GO
    PRINT '========== Use LATIN1_GENERAL_BIN Collation on the Czech_CI_AI Collation Table ==========';
    SET STATISTICS TIME ON;
    SELECT COUNT(*)
      FROM #Test1
     WHERE SomeString LIKE '%AB%' COLLATE LATIN1_GENERAL_BIN;
    SET STATISTICS TIME OFF;
    GO

    Here are the results on my ol' war-horse desktop.

    ========== COLLATION = Czech_CI_AI ==========
    SQL Server Execution Times:
       CPU time = 14625 ms, elapsed time = 15100 ms.
    ========== COLLATION = SQL_Latin1_General_CP1_CI_AS ==========
    SQL Server Execution Times:
       CPU time = 1203 ms, elapsed time = 1310 ms.
    ========== COLLATION = SQL_LATIN1_GENERAL_CP1_CI_AI ==========
    SQL Server Execution Times:
       CPU time = 1172 ms, elapsed time = 1230 ms.
    ========== COLLATION = LATIN1_GENERAL_BIN ==========
    SQL Server Execution Times:
       CPU time = 484 ms, elapsed time = 1137 ms.
    ========== Use LATIN1_GENERAL_BIN Collation on the Czech_CI_AI Collation Table ==========
    SQL Server Execution Times:
       CPU time = 1375 ms, elapsed time = 1380 ms.

    Like I said, this is NOT a bug in REPLACE. It affects everything when comparing strings.

    --Jeff Moden



  • No longer an enigma! This explains it pretty well.

    I guess I'm going to need to read up some on collations.


