Dealing with Incomplete Data - A T-SQL Puzzle 1
Posted Monday, November 21, 2011 9:28 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Thursday, June 27, 2013 9:51 AM
Points: 141, Visits: 113
Comments posted to this topic are about the item Dealing with Incomplete Data - A T-SQL Puzzle 1


Kindest Regards,

M Suresh Kumar

Post #1209814
Posted Tuesday, November 22, 2011 2:28 AM


SSCertifiable


Group: General Forum Members
Last Login: Today @ 4:23 AM
Points: 6,751, Visits: 12,849
What a nicely written article to start the day off with; very enjoyable.

Suresh, have you tried using CROSS APPLY to partition the rows into "records", as an alternative to calculating the upper and lower bounds? You can then sequence the rows within each "record"; by joining the result to itself, staggered by one row, identifying the missing data becomes quite simple:

;WITH OrderedData AS (
    SELECT
        p.Seq_No,
        p.CData,
        x.RowID,
        rn = ROW_NUMBER() OVER(PARTITION BY x.RowID ORDER BY p.Seq_No)
    FROM #Test_Table p
    CROSS APPLY (
        -- Seq_No of the most recent delimiter row at or before this row
        SELECT RowID = MAX(Seq_No)
        FROM #Test_Table
        WHERE CData = '*********************************************'
          AND Seq_No <= p.Seq_No
    ) x
)
SELECT t.*
FROM OrderedData t
WHERE t.RowID IN (
    -- "records" where 'RateQuote' is immediately followed by a blank row
    SELECT r1.RowID
    FROM OrderedData r1
    INNER JOIN OrderedData r2
        ON r2.RowID = r1.RowID
       AND r2.rn = r1.rn + 1
    WHERE r1.CData = 'RateQuote'
      AND r2.CData = ' '
)
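
If you want to run the snippet standalone, here is a minimal sketch of the test data it assumes. The column names and the delimiter row come from the query itself; the sample values are made up for illustration.

-- Hypothetical stand-in for the article's test data (values invented for illustration).
IF OBJECT_ID('tempdb..#Test_Table') IS NOT NULL DROP TABLE #Test_Table;

CREATE TABLE #Test_Table
(
    Seq_No INT          NOT NULL,
    CData  VARCHAR(100) NULL
);

INSERT INTO #Test_Table (Seq_No, CData)
VALUES (1, '*********************************************'),  -- record delimiter
       (2, 'RateQuote'),
       (3, '125.00'),                                          -- complete record: a value follows the label
       (4, '*********************************************'),  -- next record delimiter
       (5, 'RateQuote'),
       (6, ' ');                                               -- incomplete record: blank where the value should be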

I wonder how this would perform against the original?

Cheers

ChrisM


“Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
Exploring Recursive CTEs by Example Dwain Camps
Post #1209917
Posted Tuesday, November 22, 2011 3:06 AM
Grasshopper


Group: General Forum Members
Last Login: Thursday, February 13, 2014 3:23 AM
Points: 10, Visits: 68
Dear Sir,
Very good use of CTEs. I came to appreciate the real strength of CTEs after reading this article.

The best thing about CTEs is that they keep the code readable. It is very easy to follow a developer's approach in solutions built on CTEs.

The real-life scenario you have given here also helps me understand the kinds of problems and real-world situations a DBA can face.

I am new to this forum and I want to become a good DBA.

Now I know that I am going to learn a lot here.


Thanks...
Post #1209934
Posted Tuesday, November 22, 2011 4:02 AM
Grasshopper


Group: General Forum Members
Last Login: Thursday, February 13, 2014 3:23 AM
Points: 10, Visits: 68
Dear Sir,
This is a really good alternative solution.

Can't I delete the selected bad rows in the same statement, like this?

WITH OrderedData AS (
    SELECT
        p.Seq_No,
        p.CData,
        x.RowID,
        rn = ROW_NUMBER() OVER(PARTITION BY x.RowID ORDER BY p.Seq_No)
    FROM #Test_Table p
    CROSS APPLY (
        SELECT RowID = MAX(Seq_No)
        FROM #Test_Table
        WHERE CData = '*********************************************'
          AND Seq_No <= p.Seq_No
    ) x
)
DELETE FROM t
OUTPUT deleted.Seq_No, deleted.CData
INTO dbo.Test_Table_Bad_Records
FROM OrderedData t
WHERE t.RowID IN (
    SELECT r1.RowID
    FROM OrderedData r1
    INNER JOIN OrderedData r2
        ON r2.RowID = r1.RowID
       AND r2.rn = r1.rn + 1
    WHERE r1.CData = 'RateQuote'
      AND r2.CData = ' '
)
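
One assumption in that statement: the OUTPUT ... INTO target has to exist before the DELETE runs. A minimal sketch, assuming it simply mirrors the two captured columns:

-- Hypothetical target for the OUTPUT clause; it must exist before the DELETE runs.
IF OBJECT_ID('dbo.Test_Table_Bad_Records', 'U') IS NULL
    CREATE TABLE dbo.Test_Table_Bad_Records
    (
        Seq_No INT          NOT NULL,
        CData  VARCHAR(100) NULL
    );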


Thanks..
Post #1209977
Posted Tuesday, November 22, 2011 3:04 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Thursday, June 27, 2013 9:51 AM
Points: 141, Visits: 113
Hi Chris,
Thanks for the nice feedback.
I appreciate the solution you have provided. Your code is shorter too, which makes me like it even more.
Just to compare from a performance perspective, I tried executing both code snippets in the same batch. The query cost (relative to the batch) was 17% for the version in the article and 83% for the alternative version.
Notwithstanding the performance difference, which is minor since the data volume in the sample is very limited, I still like your code for its brevity.
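
For a fairer comparison I could also capture actual execution statistics rather than the estimated percentages. A sketch of the harness (the two queries themselves are as posted above):

-- Sketch: measure actual elapsed time and I/O for each version,
-- rather than relying on "Query cost (relative to the batch)".
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the query from the article here ...

-- ... run ChrisM's CROSS APPLY version here ...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;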

------Thanks and Best Regards,
Suresh.



Kindest Regards,

M Suresh Kumar

Post #1210585
Posted Tuesday, November 22, 2011 10:07 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Yesterday @ 10:21 PM
Points: 35,951, Visits: 30,235
Suresh Kumar Maganti (11/22/2011)
The query cost (relative to the batch) was 17% for the version in the article and 83% for the alternative version.


The query cost relative to the batch actually doesn't mean much. It's quite easy to write two queries that do the same thing where one shows 100% of the batch cost and the other 0%, and yet, when you run them, the 0% query takes several seconds while the 100% query runs so fast it's almost not measurable.
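
A quick way to see this for yourself is to generate a million integers two different ways in the same batch and compare the estimated percentages with the actual run times. A sketch (the row count is arbitrary):

-- Query 1: recursive CTE. The optimizer assumes very few recursions, so the
-- estimated cost (and its share of the batch) is tiny, yet it runs slowly.
WITH Counter AS
(
    SELECT N = 1
    UNION ALL
    SELECT N + 1 FROM Counter WHERE N < 1000000
)
SELECT MAX(N) FROM Counter
OPTION (MAXRECURSION 0);

-- Query 2: cascading CROSS JOINs (a "pseudo-tally"). It shows a much larger
-- share of the estimated batch cost, yet typically finishes far faster.
WITH
    E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(n)),  -- 10 rows
    E2(N) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b),                                   -- 100 rows
    E4(N) AS (SELECT 1 FROM E2 a CROSS JOIN E2 b),                                   -- 10,000 rows
    E8(N) AS (SELECT 1 FROM E4 a CROSS JOIN E4 b),                                   -- 100,000,000 rows (never fully materialized)
    Tally AS (SELECT TOP (1000000) N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E8)
SELECT MAX(N) FROM Tally;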


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

"Change is inevitable. Change for the better is not." -- 04 August 2013
(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1210693
Posted Wednesday, November 23, 2011 9:43 AM
Forum Newbie


Group: General Forum Members
Last Login: Thursday, April 10, 2014 5:11 PM
Points: 8, Visits: 273
Jeff Moden (11/22/2011)
Suresh Kumar Maganti (11/22/2011)
The query cost (relative to the batch) was 17% for the version in the article and 83% for the alternative version.


The query cost relative to the batch actually doesn't mean much. It's quite easy to write two queries that do the same thing where one shows 100% of the batch cost and the other 0%, and yet, when you run them, the 0% query takes several seconds while the 100% query runs so fast it's almost not measurable.


I'm relatively new to SQL tuning. I haven't heard of that happening. Can you explain how or why that might happen? Thank you.
Post #1211165