• T.Ashish (9/27/2013)


    Thanks Dwain.C

    Kindly ignore my earlier post, I missed it completely. Actually, the other four solutions are giving accurate results.
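    For anyone who wants to try these, all four queries assume a simple Shift/Rate table. The actual test12 definition wasn't posted in this thread, so the setup below is only a guess at its shape (column names and types assumed):

    -- Guessed shape of test12: each row pairs a Shift with a Rate.
    -- The queries are relational division: return every Rate whose Shifts
    -- include all of the Shifts that Rate 4 has.
    CREATE TABLE test12
    (
        Shift VARCHAR(20) NOT NULL,   -- assumed type
        Rate  INT         NOT NULL    -- assumed type
    );

    INSERT INTO test12 (Shift, Rate)
    VALUES ('Day', 4), ('Night', 4),   -- Rate 4 works Day and Night
           ('Day', 5), ('Night', 5),   -- Rate 5 covers both of Rate 4's shifts -> qualifies
           ('Day', 6);                 -- Rate 6 covers only one shift -> does not qualify

    With this sample data, the queries below return Rates 4 and 5.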

    My results are as below:

    1. hunchback

    WITH C1 AS (SELECT Shift FROM test12 WHERE Rate = 4)

    SELECT A.Rate FROM test12 AS A INNER JOIN C1 AS B ON A.Shift = B.Shift

    GROUP BY A.Rate HAVING COUNT(*) = (SELECT COUNT(*) FROM C1);

    Table 'test12'. Scan count 2, logical reads 48

    Query Cost relative to batch = 21%

    2. dwain.c

    SELECT Rate FROM test12

    WHERE Shift IN (SELECT Shift FROM test12 WHERE Rate = 4)

    GROUP BY Rate HAVING COUNT(*) = (SELECT COUNT(*) FROM test12 WHERE Rate = 4);

    Table 'test12'. Scan count 2, logical reads 48

    Query Cost relative to batch = 21%

    3. dwain.c (Divide and Conquer)

    SELECT Shift INTO #T FROM test12 WHERE Rate = 4;

    DECLARE @Rows INT = @@ROWCOUNT;

    SELECT Rate FROM test12

    WHERE Shift IN (SELECT Shift FROM #T)

    GROUP BY Rate HAVING COUNT(*) = @Rows;

    GO

    DROP TABLE #T;

    Table 'test12'. Scan count 1, logical reads 2

    '#T_____00000000000D'. Scan count 1, logical reads 22

    Table 'test12'. Scan count 1, logical reads 2

    Query Cost relative to batch = 23%

    4. dwain.c

    WITH Target AS

    (SELECT Rate, Shift FROM test12 WHERE Rate = 4)

    SELECT c.Rate

    FROM

    (SELECT Rate, rc=COUNT(*) FROM test12 GROUP BY Rate) a

    JOIN

    (SELECT rc=COUNT(*) FROM Target) b ON b.rc <= a.rc

    JOIN test12 AS c ON c.Rate = a.Rate

    JOIN Target AS d ON d.Shift = c.Shift

    GROUP BY c.Rate

    HAVING COUNT(*) = MIN(b.rc);

    Table 'test12'. Scan count 8, logical reads 54

    Query Cost relative to batch = 35%

    So, cost-wise, #1 and #2 are performing better than #3.

    ElapsedMS figures are as below:

    1. 990

    2. 956

    3. 960 (Divide and Conquer)

    4. 2723

    I could not get "Divide and Conquer" into first place.

    Thanks again everyone for putting your efforts.

    #1 and #2 have effectively identical execution plans, so any timing variance you're seeing between them is probably random; multiple test runs would likely swing one way or the other.
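    If you want to confirm that, a simple approach (just a sketch; adjust the repeat count to taste) is to turn on time statistics and let SSMS/sqlcmd re-run each batch several times, then compare the reported elapsed times:

    SET STATISTICS TIME ON;
    GO

    -- Example with query #2; repeat the same pattern for the other batches.
    SELECT Rate FROM test12
    WHERE Shift IN (SELECT Shift FROM test12 WHERE Rate = 4)
    GROUP BY Rate HAVING COUNT(*) = (SELECT COUNT(*) FROM test12 WHERE Rate = 4);
    GO 5    -- SSMS/sqlcmd batch repeat: runs the batch 5 times

    SET STATISTICS TIME OFF;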

    #3 (Divide and Conquer) has an advantage that will likely grow with the number of rows in your table (the more rows, the more the initial SELECT INTO that captures the row count will help). I think, anyway.
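    One quick way to test that theory is to bulk the table up and re-run all four queries. This is a sketch only; it assumes the two-column test12 shape guessed at earlier (adjust names and types to the real table), and it just adds filler rows for timing, so the Rates the queries return will of course change:

    -- Add 100 shifts x 1,000 rates = 100,000 rows of filler data.
    INSERT INTO test12 (Shift, Rate)
    SELECT 'Shift' + CAST(s.n AS VARCHAR(10)), r.n
    FROM (SELECT TOP (100)  n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
          FROM sys.all_columns) AS s
    CROSS JOIN
         (SELECT TOP (1000) n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
          FROM sys.all_columns) AS r;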


    My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

    My thought question: Have you ever been told that your query runs too fast?

    My advice:
    INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
    The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.

    Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
    Since random numbers are too important to be left to chance, let's generate some!
    Learn to understand recursive CTEs by example.