Using a Recursive CTE to Generate a List

  • This is an innovative way to generate a comma-delimited list, but performance is my main concern here. If you SET STATISTICS IO ON and turn on your actual execution plan, it becomes clear, quickly, that this is a very IO intensive query that performed 1170 reads, which is very high for the volume of data here. This would not scale well for larger data sets.

    Nearly any other solution would perform well---XML PATH/STUFF providing good IO, but a bit of a CPU hit...and some other CROSS APPLY/CROSS JOIN solutions a bit more read heavy. I'd love to envision a stick-built dynamic SQL statement that could generate the lists, but I'm not coming up with it this morning---but either way, watch performance to ensure this doesn't hamstring an application when the data gets much larger.
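    For comparison, here is a sketch of both patterns under discussion: a recursive-CTE concatenation of the kind the article describes, and the FOR XML PATH/STUFF alternative mentioned above. The dbo.Counties table and its columns are hypothetical stand-ins, not the article's actual schema.

    ```sql
    -- Hypothetical schema: dbo.Counties (StateID INT, CountyName VARCHAR(50)).

    -- 1) Recursive-CTE concatenation: append one name per recursion level.
    WITH Ordered AS (
        SELECT StateID, CountyName,
               rn = ROW_NUMBER() OVER (PARTITION BY StateID ORDER BY CountyName)
        FROM dbo.Counties
    ),
    Build AS (
        SELECT StateID, rn, CountyList = CAST(CountyName AS varchar(max))
        FROM Ordered
        WHERE rn = 1
        UNION ALL
        SELECT o.StateID, o.rn, b.CountyList + ',' + o.CountyName
        FROM Build b
        INNER JOIN Ordered o ON o.StateID = b.StateID AND o.rn = b.rn + 1
    )
    SELECT StateID, MAX(CountyList) AS CountyList  -- every partial list is a
    FROM Build                                     -- prefix of the full one, so
    GROUP BY StateID;                              -- MAX returns the complete list

    -- 2) FOR XML PATH/STUFF: typically far fewer reads, at some CPU cost.
    SELECT s.StateID,
           STUFF((SELECT ',' + c.CountyName
                  FROM dbo.Counties c
                  WHERE c.StateID = s.StateID
                  ORDER BY c.CountyName
                  FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
                 1, 1, '') AS CountyList
    FROM (SELECT DISTINCT StateID FROM dbo.Counties) s;
    ```

    (On SQL Server 2017 and later, STRING_AGG(CountyName, ',') replaces both patterns, but that option postdates this thread.)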

  • Ed Pollack (6/26/2015)


    This is an innovative way to generate a comma-delimited list, but performance is my main concern here. If you SET STATISTICS IO ON and turn on your actual execution plan, it becomes clear, quickly, that this is a very IO intensive query that performed 1170 reads, which is very high for the volume of data here. This would not scale well for larger data sets.

    Nearly any other solution would perform well---XML PATH/STUFF providing good IO, but a bit of a CPU hit...and some other CROSS APPLY/CROSS JOIN solutions a bit more read heavy. I'd love to envision a stick-built dynamic SQL statement that could generate the lists, but I'm not coming up with it this morning---but either way, watch performance to ensure this doesn't hamstring an application when the data gets much larger.

    The reads reported by Statistics IO are rows in this case rather than pages - nevertheless, your comments stand.

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • ChrisM@Work (6/26/2015)


    Ed Pollack (6/26/2015)


    This is an innovative way to generate a comma-delimited list, but performance is my main concern here. If you SET STATISTICS IO ON and turn on your actual execution plan, it becomes clear, quickly, that this is a very IO intensive query that performed 1170 reads, which is very high for the volume of data here. This would not scale well for larger data sets.

    Nearly any other solution would perform well---XML PATH/STUFF providing good IO, but a bit of a CPU hit...and some other CROSS APPLY/CROSS JOIN solutions a bit more read heavy. I'd love to envision a stick-built dynamic SQL statement that could generate the lists, but I'm not coming up with it this morning---but either way, watch performance to ensure this doesn't hamstring an application when the data gets much larger.

    The reads reported by Statistics IO are rows in this case rather than pages - nevertheless, your comments stand.

    You're not the first person that I trust to say it's rows rather than pages, Chris. That notwithstanding, do you have a Microsoft link that states that, or a different link that proves it?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    MSDN has pretty straightforward definitions for the data in question:

    https://msdn.microsoft.com/en-us/library/ms184361.aspx

    If their documentation is wrong, then I'd definitely like to see some sort of proof of that (query/IO measurements, etc...).

    Lastly, it's Friday, and SSC doesn't have a message icon for a martini glass. Not sure which of these problems is more significant 🙂

    -Ed

    Jeff Moden (6/26/2015)


    ChrisM@Work (6/26/2015)


    Ed Pollack (6/26/2015)


    This is an innovative way to generate a comma-delimited list, but performance is my main concern here. If you SET STATISTICS IO ON and turn on your actual execution plan, it becomes clear, quickly, that this is a very IO intensive query that performed 1170 reads, which is very high for the volume of data here. This would not scale well for larger data sets.

    Nearly any other solution would perform well---XML PATH/STUFF providing good IO, but a bit of a CPU hit...and some other CROSS APPLY/CROSS JOIN solutions a bit more read heavy. I'd love to envision a stick-built dynamic SQL statement that could generate the lists, but I'm not coming up with it this morning---but either way, watch performance to ensure this doesn't hamstring an application when the data gets much larger.

    The reads reported by Statistics IO are rows in this case rather than pages - nevertheless, your comments stand.

    You're not the first person that I trust to say it's rows rather than pages, Chris. That notwithstanding, do you have a Microsoft link that states that, or a different link that proves it?

  • Jeff Moden (6/26/2015)


    ChrisM@Work (6/26/2015)


    Ed Pollack (6/26/2015)


    This is an innovative way to generate a comma-delimited list, but performance is my main concern here. If you SET STATISTICS IO ON and turn on your actual execution plan, it becomes clear, quickly, that this is a very IO intensive query that performed 1170 reads, which is very high for the volume of data here. This would not scale well for larger data sets.

    Nearly any other solution would perform well---XML PATH/STUFF providing good IO, but a bit of a CPU hit...and some other CROSS APPLY/CROSS JOIN solutions a bit more read heavy. I'd love to envision a stick-built dynamic SQL statement that could generate the lists, but I'm not coming up with it this morning---but either way, watch performance to ensure this doesn't hamstring an application when the data gets much larger.

    The reads reported by Statistics IO are rows in this case rather than pages - nevertheless, your comments stand.

    You're not the first person that I trust to say it's rows rather than pages, Chris. That notwithstanding, do you have a Microsoft link that states that, or a different link that proves it?

    Hi Jeff

    Given a little time I'll find a link from a reputable author, probably not from MS directly. These came up first in the search:

    http://blogs.msdn.com/b/craigfr/archive/2007/10/25/recursive-ctes.aspx

    http://blogs.msdn.com/b/craigfr/archive/2007/11/07/recursive-ctes-continued.aspx

    http://www.littlekendra.com/2011/01/04/tallytables/

    http://bradsruminations.blogspot.co.uk/2010/03/this-article-on-recurson-is-entitled.html

    rCTEs are driven by a stack spool, which operates on a row-by-row basis. I can't yet find a statement similar to "stack spools don't work with pages, they work with rows, and this is reflected in the high logical reads observed" - which I know I've read.

    There's a hint in this loosely related article by Paul White and similar articles. I'll keep searching because it would be a handy reference to have.

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Ed Pollack (6/26/2015)


    MSDN has pretty straightforward definitions for the data in question:

    https://msdn.microsoft.com/en-us/library/ms184361.aspx

    If their documentation is wrong, then I'd definitely like to see some sort of proof of that (query/IO measurements, etc...).

    Exactly. The clearest point on that page, with no sign of any exception for rCTEs, is...

    logical reads - Number of pages read from the data cache.

    If someone has demonstrable proof otherwise, please provide the link or the code that demonstrates it. And, no... I'm not saying any of that because I don't believe the claim, especially since MS also didn't clearly document the horrible thing that happens when played against non-iTVF UDFs. I really want to know, but I'm simply not going to take someone's word about it. I'd like to see the proof so that I can successfully replay it for others, if it's actually true, like the non-iTVF UDF problem many of us have proven.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (6/26/2015)


    Ed Pollack (6/26/2015)


    MSDN has pretty straightforward definitions for the data in question:

    https://msdn.microsoft.com/en-us/library/ms184361.aspx

    If their documentation is wrong, then I'd definitely like to see some sort of proof of that (query/IO measurements, etc...).

    Exactly. The clearest point on that page, with no sign of any exception for rCTEs, is...

    logical reads - Number of pages read from the data cache.

    If someone has demonstrable proof otherwise, please provide the link or the code that demonstrates it. And, no... I'm not saying any of that because I don't believe the claim, especially since MS also didn't clearly document the horrible thing that happens when played against non-iTVF UDFs. I really want to know, but I'm simply not going to take someone's word about it. I'd like to see the proof so that I can successfully replay it for others, if it's actually true, like the non-iTVF UDF problem many of us have proven.

    This little set of statements shows that logical reads reported by SET STATISTICS IO ON include reads from internal worktables (in this case a stack spool) as well as the data cache. Logical reads from the data cache compute nicely as pages. Reads from the worktable compute nicely as rows if I can find the recipe...

    IF OBJECT_ID('tempdb..#Temp') IS NOT NULL DROP TABLE #Temp;

    SELECT
        rn = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
    INTO #Temp
    FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) d (n),
         (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) e (n);

    CREATE UNIQUE CLUSTERED INDEX ucx_rn ON #Temp (rn);

    SET STATISTICS IO,TIME ON;
    SELECT * FROM #Temp WHERE rn = 1;
    SET STATISTICS IO,TIME OFF;
    -- Table '#Temp...00000001741F'. Scan count 0, logical reads 2, ...
    -- 2 pages

    SET STATISTICS IO,TIME ON;
    WITH rCTE_Scan AS (
        SELECT t.* FROM #Temp t WHERE rn = 1
        UNION ALL
        SELECT t.*
        FROM rCTE_Scan lr INNER JOIN #Temp t ON lr.rn+1 = t.rn
    )
    SELECT * FROM rCTE_Scan;
    SET STATISTICS IO,TIME OFF;
    -- Table '#Temp...00000001741F'. Scan count 0, logical reads 202, ...
    -- 2 pages * (100+1) = 202 (PW explains the extra read in the last reference)
    -- Table 'Worktable'. Scan count 2, logical reads 601, ...

    Whilst looking into this, I've found two quotes from PW (one of them in here) stating that logical reads of worktables are rows rather than pages - but still nothing from MS or white papers or whatever.

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Ed Pollack (6/26/2015)


    ...

    If their documentation is wrong...

    It isn't wrong at all - it's just incomplete.

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

    Very interesting. However, the stated result was for a comma-delimited CountyID, not CountyName. When I changed CountyName to CountyID I got a conversion error. Not sure why.

  • dfischer 36834 (6/26/2015)


    Very interesting. However, the stated result was for a comma-delimited CountyID, not CountyName. When I changed CountyName to CountyID I got a conversion error. Not sure why.

    Try casting CountyID as varchar.
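    A minimal sketch of that fix, assuming CountyID is an integer column (the variable-concatenation form and the dbo.Counties table here are illustrative, not the article's exact code):

    ```sql
    -- Concatenating an INT into a string without a cast raises a conversion
    -- error, so convert CountyID explicitly first.
    DECLARE @List varchar(max) = '';

    SELECT @List = @List + ',' + CAST(c.CountyID AS varchar(10))
    FROM dbo.Counties c          -- hypothetical table
    ORDER BY c.CountyID;

    SELECT STUFF(@List, 1, 1, '') AS CountyIDList;  -- drop the leading comma
    ```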


    Low-hanging fruit picker and defender of the moggies

    For better assistance in answering your questions, please read this.


    Understanding and using APPLY, (I) and (II) Paul White

    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • ChrisM@Work (6/26/2015)


    Jeff Moden (6/26/2015)


    Ed Pollack (6/26/2015)


    MSDN has pretty straightforward definitions for the data in question:

    https://msdn.microsoft.com/en-us/library/ms184361.aspx

    If their documentation is wrong, then I'd definitely like to see some sort of proof of that (query/IO measurements, etc...).

    Exactly. The clearest point on that page, with no sign of any exception for rCTEs, is...

    logical reads - Number of pages read from the data cache.

    If someone has demonstrable proof otherwise, please provide the link or the code that demonstrates it. And, no... I'm not saying any of that because I don't believe the claim, especially since MS also didn't clearly document the horrible thing that happens when played against non-iTVF UDFs. I really want to know, but I'm simply not going to take someone's word about it. I'd like to see the proof so that I can successfully replay it for others, if it's actually true, like the non-iTVF UDF problem many of us have proven.

    This little set of statements shows that logical reads reported by SET STATISTICS IO ON include reads from internal worktables (in this case a stack spool) as well as the data cache. Logical reads from the data cache compute nicely as pages. Reads from the worktable compute nicely as rows if I can find the recipe...

    IF OBJECT_ID('tempdb..#Temp') IS NOT NULL DROP TABLE #Temp;

    SELECT
        rn = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
    INTO #Temp
    FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) d (n),
         (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) e (n);

    CREATE UNIQUE CLUSTERED INDEX ucx_rn ON #Temp (rn);

    SET STATISTICS IO,TIME ON;
    SELECT * FROM #Temp WHERE rn = 1;
    SET STATISTICS IO,TIME OFF;
    -- Table '#Temp...00000001741F'. Scan count 0, logical reads 2, ...
    -- 2 pages

    SET STATISTICS IO,TIME ON;
    WITH rCTE_Scan AS (
        SELECT t.* FROM #Temp t WHERE rn = 1
        UNION ALL
        SELECT t.*
        FROM rCTE_Scan lr INNER JOIN #Temp t ON lr.rn+1 = t.rn
    )
    SELECT * FROM rCTE_Scan;
    SET STATISTICS IO,TIME OFF;
    -- Table '#Temp...00000001741F'. Scan count 0, logical reads 202, ...
    -- 2 pages * (100+1) = 202 (PW explains the extra read in the last reference)
    -- Table 'Worktable'. Scan count 2, logical reads 601, ...

    Whilst looking into this, I've found two quotes from PW (one of them in here) stating that logical reads of worktables are rows rather than pages - but still nothing from MS or white papers or whatever.

    Yeah... PW was the other trusted person I was speaking of on this subject.

    I'll check your demo code. Thanks for posting it. I'll also see if I can convince myself with a bit of code and, if I can, I'll post my findings one way or another so that others can have a go at it.

    It isn't wrong at all - it's just incomplete.

    It's funny how people make such excuses for such things. The documentation clearly states that logical reads are the number of pages read, period. If what you and Paul have said is true, then the documentation is actually wrong and needs to be fixed. 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Ok... Chris' code on the subject of logical reads gave me an idea for different code to prove it to myself (which is no easy task). Early findings support the idea that logical reads from a work table are row based and not page based. I have another proof that I want to do but I have a funeral that I have to go to. The "death" of this controversy is going to have to wait (for me anyway) and I'll do the "Alice's Restaurant" thing with code later this evening.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    If I ever thought that one comment couldn't lead to this much discussion, then I now certainly know better 🙂

    Jeff Moden (6/27/2015)


    Ok... Chris' code on the subject of logical reads gave me an idea for different code to prove it to myself (which is no easy task). Early findings support the idea that logical reads from a work table are row based and not page based. I have another proof that I want to do but I have a funeral that I have to go to. The "death" of this controversy is going to have to wait (for me anyway) and I'll do the "Alice's Restaurant" thing with code later this evening.

  • I'm still working on it. Everything I come up with has had a hole in it.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    I've done some testing on rCTEs in the past, but couldn't correlate the logical reads to rows/recursions.

    😎

    Simple test code

    USE Test;
    GO
    SET NOCOUNT ON;
    /* Recursions from 1-20,30,40,50,60,70,80,90,100 */
    GO
    DECLARE @SAMPLE_SIZE INT = 1;
    DECLARE @INT_BUCKET01 INT = 0;
    /* SIGNATURE_998 */
    ;WITH RECURSIVE_CTE AS
    ( SELECT 1 AS N UNION ALL
      SELECT X.N + 1 FROM RECURSIVE_CTE X
      WHERE X.N < @SAMPLE_SIZE )
    SELECT @INT_BUCKET01 = RC.N FROM RECURSIVE_CTE RC;
    GO
    /* The batch above is repeated verbatim, changing only the first DECLARE,
       for @SAMPLE_SIZE = 2 through 20 and then 30, 40, 50, 60, 70, 80, 90, 100. */

    /* Get the stats from sys.dm_exec_query_stats */
    /* HIDE_THIS_ONE */
    SELECT
         DEQS.last_worker_time
        ,DEQS.last_physical_reads
        ,DEQS.last_logical_writes
        ,DEQS.last_logical_reads
        ,DEQS.last_elapsed_time
        ,DEQS.last_rows
        ,DEST.text
        ,DEQP.query_plan
    FROM sys.dm_exec_query_stats DEQS
    CROSS APPLY sys.dm_exec_sql_text(DEQS.sql_handle) DEST
    CROSS APPLY sys.dm_exec_query_plan(DEQS.plan_handle) DEQP
    WHERE DEST.text LIKE N'%SIGNATURE_998%'
    AND DEST.text NOT LIKE N'%HIDE_THIS_ONE%'
    ORDER BY DEQS.last_execution_time DESC;
    GO

    Results from my test system

    last_worker_time last_physical_reads last_logical_writes last_logical_reads last_elapsed_time last_rows
    ---------------- ------------------- ------------------- ------------------ ----------------- ---------
    1016             0                   2                   905                1165              100
    935              0                   2                   815                1076              90
    1270             0                   2                   776                1475              80
    691              0                   2                   635                794               70
    656              0                   2                   545                752               60
    557              0                   2                   455                635               50
    476              0                   2                   365                540               40
    281              0                   2                   185                309               20
    380              0                   2                   275                426               30
    385              0                   2                   215                417               18
    307              0                   2                   176                339               19
    337              0                   2                   158                367               17
    298              0                   2                   149                327               16
    260              0                   2                   140                287               15
    251              0                   2                   131                275               14
    273              0                   2                   113                292               12
    286              0                   2                   104                308               11
    228              0                   2                   122                250               13
    819              0                   2                   140                854               10
    445              0                   2                   125                729               9
    318              0                   2                   77                 338               8
    337              0                   2                   59                 357               6
    353              0                   2                   68                 376               7
    231              0                   2                   50                 244               5
    225              0                   2                   41                 235               4
    196              0                   2                   23                 247               2
    192              0                   2                   32                 200               3
    150              0                   0                   12                 151               1

    Results graph
