Dealing with Incomplete Data - A T-SQL Puzzle 1

  • Comments posted to this topic are about the item Dealing with Incomplete Data - A T-SQL Puzzle 1


    Kindest Regards,

    M Suresh Kumar

  • What a really nicely written article to start the day off with, very enjoyable.

Suresh, have you tried using CROSS APPLY to partition the rows into "records" as an alternative to calculating the upper and lower bound? You can then sequence the rows within each "record"; by joining the result to itself (staggered by one row), identifying the missing data is quite simple:

    ;WITH OrderedData AS (
        SELECT
            p.Seq_No,
            p.CData,
            x.RowID,
            rn = ROW_NUMBER() OVER(PARTITION BY x.RowID ORDER BY p.Seq_No)
        FROM #Test_Table p
        CROSS APPLY (
            -- the most recent separator row at or before the current row
            -- marks the start of this row's "record"
            SELECT RowID = MAX(Seq_No)
            FROM #Test_Table
            WHERE CData = '*********************************************'
              AND Seq_No <= p.Seq_No
        ) x
    )
    SELECT t.*
    FROM OrderedData t
    WHERE t.RowID IN (
        -- a "record" is incomplete where a 'RateQuote' row is
        -- immediately followed by a blank row
        SELECT r1.RowID
        FROM OrderedData r1
        INNER JOIN OrderedData r2
            ON r2.RowID = r1.RowID
           AND r2.rn = r1.rn + 1
        WHERE r1.CData = 'RateQuote'
          AND r2.CData = ' '
    )

    I wonder how this would perform against the original?

    Cheers

    ChrisM

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Dear Sir,

    Very good use of CTEs. I came to know the real strength of CTEs after reading this article.

    The best thing about a CTE is that it keeps the code readable. It's very easy to understand a developer's approach in solutions based on CTEs.

    Also, the real-life scenario you have given here helps me understand the kinds of problems and real-world situations that can arise before a DBA.

    I am new to this forum and I want to be a good DBA.

    Now I know that I am going to learn a lot here.
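    A small illustration of that readability point (the table and column names here are hypothetical, not from the article): the same logic reads more clearly when the intermediate step is given a name with a CTE.

    -- Derived table: the intermediate result is anonymous and inline
    SELECT d.CustomerID, COUNT(*) AS OrderCount
    FROM (
        SELECT CustomerID, OrderID
        FROM dbo.Orders
        WHERE OrderDate >= '20110101' AND OrderDate < '20120101'
    ) d
    GROUP BY d.CustomerID;

    -- CTE: the same step is named up front, so the intent reads top-down
    WITH Orders2011 AS (
        SELECT CustomerID, OrderID
        FROM dbo.Orders
        WHERE OrderDate >= '20110101' AND OrderDate < '20120101'
    )
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM Orders2011
    GROUP BY CustomerID;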

    Thanks...

  • Dear Sir,

    This is a really good alternative solution.

    Can't I delete the selected bad rows in the same statement, like this:

    WITH OrderedData AS (
        SELECT
            p.Seq_No,
            p.CData,
            x.RowID,
            rn = ROW_NUMBER() OVER(PARTITION BY x.RowID ORDER BY p.Seq_No)
        FROM #Test_Table p
        CROSS APPLY (
            SELECT RowID = MAX(Seq_No)
            FROM #Test_Table
            WHERE CData = '*********************************************'
              AND Seq_No <= p.Seq_No
        ) x
    )
    DELETE FROM t
    OUTPUT deleted.Seq_No, deleted.CData
        INTO dbo.Test_Table_Bad_Records
    FROM OrderedData t
    WHERE t.RowID IN (
        SELECT r1.RowID
        FROM OrderedData r1
        INNER JOIN OrderedData r2
            ON r2.RowID = r1.RowID
           AND r2.rn = r1.rn + 1
        WHERE r1.CData = 'RateQuote'
          AND r2.CData = ' '
    )

    Thanks..

  • Hi Chris,

    Thanks for the nice feedback.

    I appreciate the solution you have provided. Your code is shorter too, which makes me like it even more.

    Just to compare from a performance perspective, I tried executing both code snippets in the same batch. The query cost (relative to the batch) was 17% for the one in the article and 83% for the alternative version.

    Notwithstanding the performance difference, which may not mean much since the data volume in the sample is very limited, I still like your code for its brevity.
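    As a cross-check, actual CPU and elapsed times could be captured as well, rather than relying only on the estimated cost percentages. A minimal sketch, assuming the same #Test_Table setup as in the article:

    SET STATISTICS TIME ON;   -- report actual CPU and elapsed time per query
    SET STATISTICS IO ON;     -- report logical reads per table

    -- ... run the article's version here ...
    -- ... run the CROSS APPLY alternative here ...

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;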

Thanks and Best Regards,

    Suresh.


    Kindest Regards,

    M Suresh Kumar

  • Suresh Kumar Maganti (11/22/2011)


    The Query cost (relative to the batch) was 17% for the one in the article. The same for the alternative version was 83%.

    The query cost relative to the batch actually doesn't mean much. It's quite easy to write two queries that do the same thing where one will show as 100% of the batch and the other as 0%, and yet, when you run them, the 0% query can take several seconds while the 100% query runs so fast it's almost not measurable.
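    A minimal sketch of how that can happen (the queries below are illustrative, not from this thread): both return the numbers 1 to 1,000,000, but the optimizer's row-count guess for a recursive CTE is very low, so its estimated share of the batch cost is tiny even though it builds the result one row at a time.

    -- Query 1: recursive CTE. The estimated cost (and so its "% of batch")
    -- is small because the optimizer guesses few rows for the recursion,
    -- yet it runs row by row and can take several seconds.
    WITH Nums AS (
        SELECT n = 1
        UNION ALL
        SELECT n + 1 FROM Nums WHERE n < 1000000
    )
    SELECT n FROM Nums
    OPTION (MAXRECURSION 0);

    -- Query 2: set-based numbers via a cross join ("tally" style). Its
    -- estimated cost can dominate the batch, yet it typically finishes
    -- in a fraction of Query 1's elapsed time.
    SELECT TOP (1000000)
           n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns a
    CROSS JOIN sys.all_columns b;

    Comparing actual elapsed and CPU times (SET STATISTICS TIME ON) tells the real story.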

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (11/22/2011)


    Suresh Kumar Maganti (11/22/2011)


    The Query cost (relative to the batch) was 17% for the one in the article. The same for the alternative version was 83%.

    The query cost relative to the batch actually doesn't mean much. It's quite easy to write two queries that do the same thing where one will show as 100% of the batch and the other as 0%, and yet, when you run them, the 0% query can take several seconds while the 100% query runs so fast it's almost not measurable.

    I'm relatively new to SQL tuning. I haven't heard of that happening. Can you explain how or why that might happen? Thank you.
