
 Problem with writing a query. Kindly guide...
 Posted Friday, September 27, 2013 12:41 AM
 Hall of Fame Group: General Forum Members Last Login: Wednesday, February 24, 2016 6:28 AM Points: 3,977, Visits: 6,431
 Continuing curiosity regarding this problem drove me to see how the proposed solutions scale, so I built a test harness.

```sql
CREATE TABLE #T (
    Rate int NOT NULL,
    Shift int NOT NULL,
    PRIMARY KEY (Rate, Shift));

INSERT INTO #T (Rate, Shift)
VALUES (-1, 1), (-1, 2), (-1, 3);

WITH Tally (n) AS (
    SELECT TOP 500000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns a CROSS JOIN sys.all_columns b),
STally (n) AS (
    SELECT TOP 10 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns a)
INSERT INTO #T
SELECT a.n, b.n
FROM Tally a
CROSS APPLY STally b;

DELETE FROM #T
WHERE (Rate < 20000 AND Shift = 1) OR
      (Rate > 400000 AND Shift = 3) OR
      (Rate BETWEEN 50000 AND 150000 AND Shift = 2);

DECLARE @Rate INT = -1
    ,@Holder INT
    ,@StartDT DATETIME;

SELECT @StartDT = GETDATE();
PRINT 'Solution by Hunchback';
SET STATISTICS TIME ON;
WITH C1 AS (
    SELECT Shift
    FROM #T
    WHERE Rate = @Rate)
SELECT @Holder = A.Rate
FROM #T AS A
INNER JOIN C1 AS B ON A.Shift = B.Shift
GROUP BY A.Rate
HAVING COUNT(*) = (SELECT COUNT(*) FROM C1);
SET STATISTICS TIME OFF;
SELECT ElapsedMS = DATEDIFF(millisecond, @StartDT, GETDATE());

SELECT @StartDT = GETDATE();
PRINT 'Solution by Dwain.C';
SET STATISTICS TIME ON;
SELECT @Holder = Rate
FROM #T
WHERE Shift IN (SELECT Shift FROM #T WHERE Rate = @Rate)
GROUP BY Rate
HAVING COUNT(*) = (SELECT COUNT(*) FROM #T WHERE Rate = @Rate);
SET STATISTICS TIME OFF;
SELECT ElapsedMS = DATEDIFF(millisecond, @StartDT, GETDATE());

SELECT @StartDT = GETDATE();
PRINT 'Divide and Conquer Solution by Dwain.C';
SET STATISTICS TIME ON;
SELECT Shift
INTO #T1
FROM #T
WHERE Rate = @Rate;
DECLARE @Rows INT = @@ROWCOUNT;
SELECT @Holder = Rate
FROM #T
WHERE Shift IN (SELECT Shift FROM #T1)
GROUP BY Rate
HAVING COUNT(*) = @Rows;
SET STATISTICS TIME OFF;
SELECT ElapsedMS = DATEDIFF(millisecond, @StartDT, GETDATE());

SELECT @StartDT = GETDATE();
PRINT 'Solution by Peso/Adam Machanic';
SET STATISTICS TIME ON;
WITH Target AS (
    SELECT Rate, Shift
    FROM #T
    WHERE Rate = @Rate)
SELECT @Holder = c.Rate
FROM (SELECT Rate, rc = COUNT(*) FROM #T GROUP BY Rate) a
JOIN (SELECT rc = COUNT(*) FROM Target) b ON b.rc <= a.rc
JOIN #T AS c ON c.Rate = a.Rate
JOIN Target AS d ON d.Shift = c.Shift
GROUP BY c.Rate
HAVING COUNT(*) = MIN(b.rc);
SET STATISTICS TIME OFF;
SELECT ElapsedMS = DATEDIFF(millisecond, @StartDT, GETDATE());
GO
DROP TABLE #T;
DROP TABLE #T1;
```

From the statistics, I get these CPU/elapsed time results:

```
Solution by Hunchback
 SQL Server Execution Times:
   CPU time = 1637 ms, elapsed time = 452 ms.
Solution by Dwain.C
 SQL Server Execution Times:
   CPU time = 1467 ms, elapsed time = 473 ms.
Divide and Conquer Solution by Dwain.C
 SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 0 ms.
 SQL Server Execution Times:
   CPU time = 1169 ms, elapsed time = 301 ms.
Solution by Peso/Adam Machanic
 SQL Server Execution Times:
   CPU time = 1996 ms, elapsed time = 2235 ms.
```

And these results are quite similar to the ElapsedMS values SELECTed into the four results panes: 460, 480, 306, 2263.

Comments:
- I eliminated Sean's solution because I believe he's solving a different problem. But of course, since the OP has confirmed that he prefers Sean's solution, I have to assume that both Hunchback and I read the requirements wrong.
- The result for my last suggestion (the Peso/Adam Machanic solution) was probably impacted by the <= I had to use in the first JOIN. I believe their solution was based on the no-remainder problem posed by Mr. Celko.

I was surprised that Divide and Conquer came in about 30% faster.

My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!
My thought question: Have you ever been told that your query runs too fast?
My advice: INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
Splitting strings based on patterns can be fast!
My temporal SQL musings: Calendar Tables, an Easter SQL, Time Slots and Self-maintaining, Contiguous Effective Dates in Temporal Tables
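For readers skimming the harness above, the core pattern all four solutions share is relational division: find every Rate whose set of Shifts covers all of the target Rate's Shifts. A minimal illustration with made-up sample data (the table variable and values here are mine, not from the thread):

```sql
-- Distilled set-matching (relational division) pattern from the harness.
DECLARE @Rate INT = 1;

DECLARE @T TABLE (Rate int NOT NULL, Shift int NOT NULL,
                  PRIMARY KEY (Rate, Shift));
INSERT INTO @T (Rate, Shift)
VALUES (1, 1), (1, 2),          -- target Rate: shifts {1, 2}
       (2, 1), (2, 2), (2, 3),  -- covers every target shift
       (3, 1);                  -- missing shift 2

SELECT Rate
FROM @T
WHERE Shift IN (SELECT Shift FROM @T WHERE Rate = @Rate)
GROUP BY Rate
HAVING COUNT(*) = (SELECT COUNT(*) FROM @T WHERE Rate = @Rate);
-- Returns Rates 1 and 2; Rate 3 is excluded because only one of its
-- rows matches a target Shift, so its group count falls short of 2.
```

The IN filter keeps only rows whose Shift appears in the target set, so a group count equal to the target's row count means that Rate matched every target Shift.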
Post #1499210
 Posted Friday, September 27, 2013 3:39 AM
 Right there with Babe Group: General Forum Members Last Login: Friday, April 15, 2016 10:16 PM Points: 755, Visits: 582
 Thanks Dwain.C. Kindly ignore my earlier post, I missed it completely. Actually, the other four solutions are giving accurate results. My results go as below:

1. hunchback
```sql
WITH C1 AS (
    SELECT Shift FROM test12 WHERE Rate = 4)
SELECT A.Rate
FROM test12 AS A
INNER JOIN C1 AS B ON A.Shift = B.Shift
GROUP BY A.Rate
HAVING COUNT(*) = (SELECT COUNT(*) FROM C1);
```
Table 'test12'. Scan count 2, logical reads 48. Query cost relative to batch = 21%.

2. dwain.c
```sql
SELECT Rate
FROM test12
WHERE Shift IN (SELECT Shift FROM test12 WHERE Rate = 4)
GROUP BY Rate
HAVING COUNT(*) = (SELECT COUNT(*) FROM test12 WHERE Rate = 4);
```
Table 'test12'. Scan count 2, logical reads 48. Query cost relative to batch = 21%.

3. dwain.c (divide and conquer)
```sql
SELECT Shift INTO #T FROM @T WHERE Rate = @Rate;
DECLARE @Rows INT = @@ROWCOUNT;
SELECT Rate
FROM @T
WHERE Shift IN (SELECT Shift FROM #T)
GROUP BY Rate
HAVING COUNT(*) = @Rows;
GO
DROP TABLE #T;
```
Table 'test12'. Scan count 1, logical reads 2. '#T_____00000000000D'. Scan count 1, logical reads 22. Table 'test12'. Scan count 1, logical reads 2. Query cost relative to batch = 23%.

4. dwain.c
```sql
WITH Target AS (
    SELECT Rate, Shift FROM test12 WHERE Rate = 4)
SELECT c.Rate
FROM (SELECT Rate, rc = COUNT(*) FROM test12 GROUP BY Rate) a
JOIN (SELECT rc = COUNT(*) FROM Target) b ON b.rc <= a.rc
JOIN test12 AS c ON c.Rate = a.Rate
JOIN Target AS d ON d.Shift = c.Shift
GROUP BY c.Rate
HAVING COUNT(*) = MIN(b.rc);
```
Table 'test12'. Scan count 8, logical reads 54. Query cost relative to batch = 35%.

So, cost-wise, 1 and 2 are performing better than 3. ElapsedMS is as below: 990, 956, 960 (Divide and Conquer), 2723.

I could not get "Divide and Conquer" in first place. Thanks again everyone for putting in your efforts.
Post #1499276
 Posted Friday, September 27, 2013 3:46 AM
 Hall of Fame Group: General Forum Members Last Login: Wednesday, February 24, 2016 6:28 AM Points: 3,977, Visits: 6,431
Post #1499284
 Posted Friday, September 27, 2013 7:47 AM
 SSCoach Group: General Forum Members Last Login: Today @ 1:36 PM Points: 16,138, Visits: 16,841
 Well, I guess I didn't quite get the requirements. Chalk that up to a lack of details. This thread could serve as a good example of why it is so important to provide those details.

_______________________________________________________________
Need help? Help us help you. Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Moden's splitter.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs
Understanding and Using APPLY (Part 1)
Understanding and Using APPLY (Part 2)
Post #1499383
 Posted Friday, September 27, 2013 7:54 AM
 Right there with Babe Group: General Forum Members Last Login: Friday, April 15, 2016 10:16 PM Points: 755, Visits: 582
 "#3 (Divide and Conquer) has an advantage that will likely grow depending on the number of rows in your table (the more rows, the more the initial INSERT capturing the row count will help). I think, anyway."

I will keep that in mind.
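To make that advantage concrete, here is the Divide and Conquer shape from the harness with the key steps annotated (identifiers @Rate, #T, #T1 are as in the harness posted earlier in the thread):

```sql
-- The target Rate's rows are scanned once, materialised into #T1, and
-- their count is captured as a free side effect of the INSERT.  The
-- single-query forms instead probe for the target twice: once for the
-- IN list and once for the HAVING COUNT(*) subquery.
SELECT Shift
INTO #T1
FROM #T
WHERE Rate = @Rate;               -- one pass over the target rows

DECLARE @Rows INT = @@ROWCOUNT;   -- row count captured without a COUNT(*)

SELECT Rate
FROM #T
WHERE Shift IN (SELECT Shift FROM #T1)  -- small materialised target set
GROUP BY Rate
HAVING COUNT(*) = @Rows;          -- compares against a plain variable
```

The larger the base table, the more that single up-front scan should save relative to repeating the target lookup inside the main query.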
Post #1499386
