Handling delimited rows

  • I am pasting two files:

    P101|101|Venks|Balan|Krishnan|M|08-08-76|Pan5555555|Address5|Address6|686036|55555|2|20.05|06-07-08|25.00

    P102|101|Anoop|Krishnan|Balu|M|08-02-77|Pan6666666|Madiwala|Bangalore|686036|66666|3|30.05|08-07-08|35.00

    P103|102|Roy|Mathew|Kakkanattu|M|08-06-74|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-08|55.00

    P104|102|Venks|Balan|Krishnan|M|08-08-76|Pan5555555|Address5|Address6|686036|55555|2|20.05|06-07-08|25.00

    P101|103|Anoop|Krishnan|Balu|M|08-02-77|Pan6666666|Madiwala|Bangalore|686036|66666|3|30.05|08-07-08|35.00

    P102|104|Roy|Mathew|Kakkanattu|M|08-06-74|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-08|55.00

    P103|105|Roy|Mathew|Kakkanattu|F|08-06-74|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00

    P104|105|Roy|Mathew|Kakkanattu|M|08-06-74|Pan7777333|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00

    P109|106|Roy|Mathew|Kakkanattu|F|08-06-75|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00

    P110|107|Roy|Mathew|Kakkanattu|F|08-06-75|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00

    Hi, I loaded the above into SQL Server, and some of the rows came out like this:

    P103105RoyMathewKakkanattuF1974-08-06Pan7777777Rt NagarBangalore561276777774502000-11-07NULL

    P104105RoyMathewKakkanattuM1974-08-06Pan7777333Rt NagarBangalore561276777774502000-11-07NULL

    P109106RoyMathewKakkanattuF1975-08-06Pan7777777Rt NagarBangalore561276777774502000-11-07NULL

    P110107RoyMathewKakkanattuF1975-08-06Pan7777777Rt NagarBangalore561276777774502000-11-07NULL

    There is a problem in the source with the date column: inside 11-07-08 there is an extra | character, which I want to remove. In the first file the data is correct, but in the second file there is a vertical bar inside the date column. How can I handle this dynamically? Please reply.

    Malleswarareddy
    I.T.Analyst
    MCITP(70-451)

  • Import the records from the file as if each record contains only one very long text field, delete the extra |, then run it through your "normal code".

    You could even add a pre-processing step that inspects the number of |-symbols in each record and runs a record through "cleaning" if there are too many.
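    That inspection is cheap in T-SQL once each raw line has been landed in a one-column staging table. A minimal sketch of the idea (the table name dbo.Staging and column RawLine are placeholders, not from this thread):

    ```sql
    -- Each valid record in these files has 16 fields, i.e. 15 pipe delimiters;
    -- any other count means the row needs cleaning.
    -- dbo.Staging(RawLine) is a hypothetical one-column staging table.
    SELECT RawLine,
           LEN(RawLine) - LEN(REPLACE(RawLine, '|', '')) AS PipeCount
      FROM dbo.Staging
     WHERE LEN(RawLine) - LEN(REPLACE(RawLine, '|', '')) <> 15;
    ```

    Rows returned by this query are the ones to route through the cleaning step; the rest can go straight to the "normal code".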

    Peter Rijs
    BI Consultant, The Netherlands

  • Removing the | is something I have already done. But the source-file owners told me not to edit the source file, so I cannot delete the | in the source. Is it possible to do this using SSIS?

    Malleswarareddy
    I.T.Analyst
    MCITP(70-451)

  • Yes, you can.

    My previous answer didn't explicitly state so, since this is the Integration Services forum, but I meant that you could do those steps in an SSIS package. Instead of saving the result in the original source file, you could either:

    - do further processing in the dataflow pipeline

    - save the results in a temporary file or table, and process that with your original package

    I think that using RAW as an intermediate destination/source here would complicate things, since you cannot change the column definition of a raw file AFAIK.

    Peter Rijs
    BI Consultant, The Netherlands

  • What you need to do is load these files into a staging table with one large column.

    Determine which rows have the extra delimiter, and delete the extra delimiter.

    Here's an example. It uses the DelimitedSplit8K function, modified for your case (I added the ItemStartPos column). The function is in the following post, which Steve posted from my attachment.

    You can now use this function to determine how many delimiters a line has and to deal with it. This code detects the rows with extra delimiters and removes the extra one in the staging table. This code is also in the following post from Steve.

    At this point, you can use the staging table as a Data Source. Alternatively, you could use a cross-tab (see the two links in my signature dealing with Cross-Tab and Pivot tables) to return the data already split apart (just use the DelimitedSplit8k function again) - just specify the source as a query, and use that cross-tab code as the query code.

    Edit: Thank you Steve for posting the code from my attachment. (Wish we could figure out why code won't post sometimes!)

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • -- latest version of function is at http://www.sqlservercentral.com/Forums/FindPost944589.aspx
    IF OBJECT_ID('dbo.DelimitedSplit8K') IS NOT NULL DROP FUNCTION dbo.DelimitedSplit8K
    GO
    CREATE FUNCTION dbo.DelimitedSplit8K
    /***************************************************************************************************
    Purpose:
    Split a given string at a given delimiter and return a list of the split elements (items).

    Returns:
    iTVF containing the following:
    ItemNumber   = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
    Item         = Element value as a VARCHAR(8000)
    ItemStartPos = Starting position of the element within the input string (column added for this thread)

    CROSS APPLY Usage Example:
    ---------------------------------------------------------------------------------------------------
    --===== Conditionally drop the test tables to make reruns easier for testing.
         -- (this is NOT a part of the solution)
         IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL
            DROP TABLE #JBMTest
    ;
    --===== Create and populate a test table on the fly (this is NOT a part of the solution).
     SELECT *
       INTO #JBMTest
       FROM (
             SELECT 1,'1,10,100,1000,10000,100000,1000000' UNION ALL
             SELECT 2,'2000000,200000,20000,2000,200,20,2' UNION ALL
             SELECT 3, 'This,is,a,test' UNION ALL
             SELECT 4, 'and so is this' UNION ALL
             SELECT 5, 'This, too (no pun intended)'
            ) d (SomeID,SomeValue)
    ;
    GO
    --===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
     SELECT test.SomeID, split.ItemNumber, split.Item
       FROM #JBMTest test
      CROSS APPLY
            (
             SELECT ItemNumber, Item
               FROM dbo.DelimitedSplit8K(test.SomeValue,',')
            ) split
    ;
    ---------------------------------------------------------------------------------------------------
    Notes:
    1. Optimized for VARCHAR(7999) or less. No testing or error reporting for truncation at 7999
       characters is done.
    2. Optimized for single character delimiter. Multi-character delimiters should be resolved
       externally from this function.
    3. Optimized for use with CROSS APPLY.
    4. Does not "trim" elements just in case leading or trailing blanks are intended.
    5. If you don't know how a Tally table can be used to replace loops, please see the following...
       http://www.sqlservercentral.com/articles/T-SQL/62867/
    6. Changing this function to use VARCHAR(MAX) will cause it to run twice as slow. It's just the
       nature of VARCHAR(MAX) whether it fits in-row or not.
    7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows
       that the UNPIVOT method is quite machine dependent and can slow things down quite a bit.
    8. Performance testing shows using "TOP" for the limiting criteria of "N" is actually
       slower and slightly more CPU intensive than the traditional WHERE N < LEN(@pString) + 2.
    9. Performance testing shows using ORDER BY (SELECT x) where "x" is anything is actually
       slower and slightly more CPU intensive than the traditional ORDER BY (SELECT N).

    Credits:
    This code is the product of many people's efforts including but not limited to the following:
    cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a
    bit of extra speed and finally redacted by Jeff Moden for a different slant on readability and
    compactness. Hats off to Paul White for his simple explanations of CROSS APPLY. Finally,
    special thanks to Erland Sommarskog for his tireless efforts to help people understand
    what you can actually do with T-SQL. I also thank whoever wrote the first article I ever saw
    on "numbers tables" which is located at the following URL ...
    http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html

    Revision History:
    Rev 00 - 20 Jan 2010 - Concept: Lynn Pettis and others.
                           Redaction/Implementation: Jeff Moden
           - Base 10 redaction and reduction for CTE. (Total rewrite)
    Rev 01 - 13 Mar 2010 - Jeff Moden
           - Removed one additional concatenation and one subtraction from the SUBSTRING in the
             SELECT list for that tiny bit of extra speed.
    Rev 02 - 14 Apr 2010 - Jeff Moden
           - No code changes. Added CROSS APPLY usage example to the header, some additional credits,
             and extra documentation.
    Rev 03 - 18 Apr 2010 - Jeff Moden
           - No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't
             actually work for this type of function.
    Rev 04 - 29 Jun 2010 - Jeff Moden
           - Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary
             "Table Spool" when the function is used in an UPDATE statement even though the function
             makes no external references.
    ***************************************************************************************************/
    --===== Define I/O parameters
            (
            @pString    VARCHAR(7999),
            @pDelimiter CHAR(1)
            )
    RETURNS TABLE
       WITH SCHEMABINDING
         AS
     RETURN
    --===== "Inline" CTE Driven "Tally Table" produces values up to
         -- 10,000... enough to cover VARCHAR(8000)
       WITH
          E1(N) AS ( --=== Create Ten 1's
                     SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 --10
                   ),
          E2(N) AS (SELECT 1 FROM E1 a, E1 b),   --100
          E4(N) AS (SELECT 1 FROM E2 a, E2 b),   --10,000
          cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM E4)
    --===== Do the split
     SELECT ROW_NUMBER() OVER (ORDER BY N) AS ItemNumber,
            SUBSTRING(@pString, N, CHARINDEX(@pDelimiter, @pString + @pDelimiter, N) - N) AS Item,
            N AS ItemStartPos
       FROM cteTally
      WHERE N < LEN(@pString) + 2
        AND SUBSTRING(@pDelimiter + @pString, N, 1) = @pDelimiter
    ;
    GO
    -- ****************************************************************************
    -- End of Modified DelimitedSplit8K
    -- ****************************************************************************

    -- See how this starts off with a table and data in it?
    -- If you had provided us the data in this format,
    -- it would have made things easier for all of the
    -- volunteers on this site to help you out.
    DECLARE @Test TABLE (RowID INT IDENTITY, Col1 VARCHAR(1000));
    INSERT INTO @Test
    SELECT 'P101|101|Venks|Balan|Krishnan|M|08-08-76|Pan5555555|Address5|Address6|686036|55555|2|20.05|06-07-08|25.00' UNION ALL
    SELECT 'P102|101|Anoop|Krishnan|Balu|M|08-02-77|Pan6666666|Madiwala|Bangalore|686036|66666|3|30.05|08-07-08|35.00' UNION ALL
    SELECT 'P103|102|Roy|Mathew|Kakkanattu|M|08-06-74|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-08|55.00' UNION ALL
    SELECT 'P104|102|Venks|Balan|Krishnan|M|08-08-76|Pan5555555|Address5|Address6|686036|55555|2|20.05|06-07-08|25.00' UNION ALL
    SELECT 'P101|103|Anoop|Krishnan|Balu|M|08-02-77|Pan6666666|Madiwala|Bangalore|686036|66666|3|30.05|08-07-08|35.00' UNION ALL
    SELECT 'P102|104|Roy|Mathew|Kakkanattu|M|08-06-74|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-08|55.00' UNION ALL
    SELECT 'P103|105|Roy|Mathew|Kakkanattu|F|08-06-74|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00' UNION ALL
    SELECT 'P104|105|Roy|Mathew|Kakkanattu|M|08-06-74|Pan7777333|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00' UNION ALL
    SELECT 'P109|106|Roy|Mathew|Kakkanattu|F|08-06-75|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00' UNION ALL
    SELECT 'P110|107|Roy|Mathew|Kakkanattu|F|08-06-75|Pan7777777|Rt Nagar|Bangalore|561276|77777|4|50.00|11-07-0|8|55.00';

    WITH CTE AS
    (
    -- Split the delimited columns out into a table,
    -- then count the number of rows created.
    -- The HAVING clause gets just the rows with extra delimiters.
    SELECT t.RowID, t.Col1,
           ColumnCount = MAX(ds.ItemNumber)
      FROM @Test t
     CROSS APPLY dbo.DelimitedSplit8K(t.Col1, '|') ds
     GROUP BY t.RowID, t.Col1
    HAVING MAX(ds.ItemNumber) > 16
    )
    -- Update the table to remove the extra delimiter.
    UPDATE t
       SET Col1 = STUFF(t.Col1, ds.ItemStartPos-1, 1, '')
      FROM @Test t
      -- Can't update the CTE since it has an aggregate,
      -- so join to the @Test table to get the row to update.
      JOIN CTE ON t.RowID = CTE.RowID
     CROSS APPLY dbo.DelimitedSplit8K(CTE.Col1, '|') ds
     WHERE ds.ItemNumber = 16;

    -- Show the results.
    SELECT t.RowID, ds.ItemNumber, ds.Item
      FROM @Test t
     CROSS APPLY dbo.DelimitedSplit8K(t.Col1, '|') ds
     ORDER BY t.RowID, ds.ItemNumber;

  • Nicely worked out solution, I liked some comments in your code a lot :w00t:

    Peter Rijs
    BI Consultant, The Netherlands

  • Peter Rijs (10/29/2010)


    Nicely worked out solution, I liked some comments in your code a lot :w00t:

    Thanks. Just be aware that the function was coded by Jeff Moden, not me.

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


