Finding “Active” Rows for the Previous Month (SQL Spackle)

  • WayneS (1/30/2014)


    Additionally, someone else may be doing the work "when it counts", and they just might copy/re-use the poorer quality code that you've written because it does do the job. Then they will be struggling to figure out why the code has performance issues.

    That's one of the most important points to be made and thanks for bringing that up, Wayne. It's not likely that people will add a comment to their code that says "Hey! I was in a hurry for a one off so don't use this code if it actually counts!".

    The other point you made is also important. Once you've learned the right way to do something, why do it any other way? The right way is faster and frequently much shorter to boot.

    Thanks for stopping by and for the great points you made.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks for the Spackle Jeff, I always read your articles.

    Just thinking a bit further, is the clustered index enough to really take advantage of those end and start date intersections? You bring up the non-Sargable point, I'm just interested in any indexing strategy recommendations.

  • davoscollective (2/2/2014)


    Just thinking a bit further, is the clustered index enough to really take advantage of those end and start date intersections? You bring up the non-Sargable point, I'm just interested in any indexing strategy recommendations.

    Absolutely.

First, I absolutely agree that the human perception, especially on today's very fast multi-processor machines, is that the non-SARGable methods are "good enough" and that, on smaller tables, the differences between non-SARGable and SARGable methods won't be appreciated. We'll get back to that perception in just a minute.

Let's build some substantial test data. This builds a no-longer-so-big 5 million row table of data including a very supportive clustered index. On my laptop, it takes only 20 seconds, so don't let the size scare you out of doing the tests.

--============================================================================================
--      Create a larger test table with the appropriate index
--============================================================================================
--===== Do this in a nice, safe place that everyone has.
    USE tempdb
;
--===== Conditionally drop the test table to make reruns easier in SSMS.
     -- Rerun this after testing to cleanup TempDB
     IF OBJECT_ID('tempdb.dbo.TestTable','U') IS NOT NULL
        DROP TABLE tempdb.dbo.TestTable
;
GO
--===== Create and populate the test table on-the-fly.
     -- The StartDates all have random values for a 5 year period including
     -- all dates from 2010 up to and NOT including 2015.
     -- The EndDates will be from 0 to not quite 90 days later than the StartDate.
   WITH
cteStartDate AS
(
 SELECT TOP 5000000
        StartDate = DATEADD(dd,ABS(CHECKSUM(NEWID()))%DATEDIFF(dd,'2010','2015'),'2010')
   FROM master.sys.all_columns ac1
  CROSS JOIN master.sys.all_columns ac2
)
 SELECT StartDate
       ,EndDate   = DATEADD(dd,ABS(CHECKSUM(NEWID()))%90,fd.StartDate)
       ,SomeCol01 = NEWID() --Just to have something else in the table
       ,SomeCol02 = NEWID() --Just to have something else in the table
   INTO dbo.TestTable
   FROM cteStartDate fd
;
--===== Add the index in question
 CREATE CLUSTERED INDEX IX_TestTable
     ON dbo.TestTable (StartDate,EndDate)
;
GO

Now, let's test the SARGable method from the article and two non-SARGable methods from the discussions above. Note that displaying to the screen is known as the "great equalizer" insofar as duration goes, because it takes roughly the same time to display the same amount of data no matter the source, so I've directed the output to other temp tables. Realistically, since we'll find more than 200 thousand rows in this test, the output wouldn't be directed to the screen anyway; the results would be used for something else, and storing them in temp tables is a good representation of that.

--============================================================================================
--      Test the code
--============================================================================================
--===== Find all rows that are "active" anytime in the desired month
  PRINT '========== SARGable Method from Article ==============================';
    SET STATISTICS TIME,IO ON;
 SELECT *
   INTO #Test1
   FROM dbo.TestTable
  WHERE EndDate   >= DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')  ,0)  --Finds first of month
    AND StartDate <  DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')+1,0); --Finds first of next month
    SET STATISTICS TIME,IO OFF;
  PRINT '========== Non-SARGable Method 1 =====================================';
  PRINT '========== Note that it returns the wrong number of rows.';
    SET STATISTICS TIME,IO ON;
 SELECT *
   INTO #Test2
   FROM dbo.TestTable
  WHERE CONVERT(CHAR(10),EndDate,102)   >= CAST('01 Oct 2013' AS DATETIME)
    AND CONVERT(CHAR(10),StartDate,102) <= CAST('01 Nov 2013' AS DATETIME); --Need correction here
    SET STATISTICS TIME,IO OFF;
  PRINT '========== Non-SARGable Method 2 =====================================';
    SET STATISTICS TIME,IO ON;
 SELECT *
   INTO #Test3
   FROM dbo.TestTable
  WHERE DATEDIFF(month,StartDate,'Oct 2013') >= 0
    AND DATEDIFF(month,'Oct 2013',EndDate)   >= 0;
    SET STATISTICS TIME,IO OFF;
--===== Drop the test results tables
   DROP TABLE #Test1, #Test2, #Test3
;

    Here are the run results from the statistics.

========== SARGable Method from Article ==============================
Table 'TestTable'. Scan count 1, logical reads 30827, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 296 ms,  elapsed time = 335 ms.
(206479 row(s) affected)
========== Non-SARGable Method 1 =====================================
========== Note that it returns the wrong number of rows.
Table 'TestTable'. Scan count 3, logical reads 40670, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 4119 ms,  elapsed time = 2587 ms.
(209173 row(s) affected)
========== Non-SARGable Method 2 =====================================
Table 'TestTable'. Scan count 3, logical reads 40758, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 2589 ms,  elapsed time = 1404 ms.
(206479 row(s) affected)
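As a quick aside for anyone new to the DATEADD/DATEDIFF idiom in the SARGable WHERE clause above: DATEDIFF counts the whole months from the "0" base date (1900-01-01) to the target date, and adding that many months back to 0 snaps any date/time to the first of its month. Here's a tiny stand-alone sketch (the variable name is just for illustration) that shows the two boundary values the predicate uses:

```sql
--===== Illustration only: the month-boundary idiom used in the SARGable predicate.
     -- DATEDIFF(mm,0,@AnyDate) counts whole months since the 1900-01-01 "zero" date;
     -- DATEADD adds that many months (or one more) back to 0 to get the boundaries.
DECLARE @AnyDate DATETIME = '15 Oct 2013 13:45';
 SELECT FirstOfMonth     = DATEADD(mm,DATEDIFF(mm,0,@AnyDate)  ,0) --2013-10-01 00:00:00.000
       ,FirstOfNextMonth = DATEADD(mm,DATEDIFF(mm,0,@AnyDate)+1,0) --2013-11-01 00:00:00.000
;
```

Because the computation involves only constants (at compile time), the columns themselves are never wrapped in a function, which is what keeps the predicate SARGable.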

    Going back to my first comments on this post, human perception is that the non-SARGable methods are "good enough" but that's only because a lot of folks aren't looking at the "big picture" when they're writing a single piece of code.

For example, the SARGable method uses nearly 14 times less CPU than non-SARGable Method 1 and nearly 9 times less CPU than non-SARGable Method 2. The SARGable method is also almost 8 times faster than non-SARGable Method 1 and more than 4 times faster than non-SARGable Method 2. It's important to note that to get even this kind of performance from the non-SARGable methods, parallelism had to come into play.

If we take the better of the two non-SARGable methods, we see the SARGable method uses nearly 9 times less CPU, didn't require parallelism, and ran more than 4 times faster.

Now, let me ask you: if all of your code made "only" such a small improvement (9 times less CPU and 4 times faster is nothing to sneeze at, and some SARGable code can be tens, hundreds, and sometimes thousands of times more efficient), would you have any performance problems on your databases? With the possible exception of SSDs, do you think you can buy a server that will run 4 times faster or use 9 times less CPU? And since pricing of SQL Server is by core (as of 2012), would you save a little money on SQL Server licensing if all your code could run without needing parallelism for speed?

Like Granny used to say, "Mind the pennies and the dollars will take care of themselves". "Good enough", according to human perception, rarely is. With respect to the tired old saws about something being a one-off or there being some sort of supposed "guarantee" that the table will "never grow", my question would be... now that folks know the right way on something like this, and the right way actually takes less code, why wouldn't you do it the right way all the time? 😉

Shifting gears to answer the question about the clustered index, change the index definition in the test-table build code to a non-clustered index, rerun that code to rebuild the test table, and then rerun the test code to see what happens. Here's what I get.

========== SARGable Method from Article ==============================
Table 'TestTable'. Scan count 3, logical reads 35213, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 608 ms,  elapsed time = 259 ms.
(206343 row(s) affected)
========== Non-SARGable Method 1 =====================================
========== Note that it returns the wrong number of rows.
Table 'TestTable'. Scan count 3, logical reads 35213, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 4711 ms,  elapsed time = 2275 ms.
(209113 row(s) affected)
========== Non-SARGable Method 2 =====================================
Table 'TestTable'. Scan count 3, logical reads 35213, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 2902 ms,  elapsed time = 1420 ms.
(206343 row(s) affected)

If you run the test code with the Actual Execution Plan turned on, you'll see that all 3 execution plans are identical except for the percentages associated with each operator. You'll also note that the "Percent of Batch" value for the SARGable code is twice as high as the other two (a good reminder that it's still just an estimate, even in the Actual Execution Plan) and yet it STILL blows the doors off the other two methods.

Let's try the same thing with no indexes on the test table at all! Remove the index build code, rerun the test-table build code, and then rerun the test code. Here's what I get.

========== SARGable Method from Article ==============================
Table 'TestTable'. Scan count 3, logical reads 35213, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(206704 row(s) affected)
(1 row(s) affected)
 SQL Server Execution Times:
   CPU time = 639 ms,  elapsed time = 250 ms.
========== Non-SARGable Method 1 =====================================
========== Note that it returns the wrong number of rows.
Table 'TestTable'. Scan count 3, logical reads 35213, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(209506 row(s) affected)
(1 row(s) affected)
 SQL Server Execution Times:
   CPU time = 4773 ms,  elapsed time = 2298 ms.
========== Non-SARGable Method 2 =====================================
Table 'TestTable'. Scan count 3, logical reads 35213, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(206704 row(s) affected)
(1 row(s) affected)
 SQL Server Execution Times:
   CPU time = 3120 ms,  elapsed time = 1515 ms.

    EVEN WITH NO INDEXES, the SARGable code STILL blows the doors off the other 2!

So, what should the indexing strategy be here? As with all else in SQL Server, "It Depends". There is one exception to that rule, though: well-written code will almost always beat not-so-well-written code, and that's the first thing that anyone should concentrate on. It just so happens that SARGable code is usually well-written code.

So the indexing strategy might be anything from "no indexes" to "build the clustered index to support date-sensitive queries like this one". "It Depends" on what kind of speed you need for non-date-sensitive queries on the same table. And remember that all indexes are a duplication of data except for the leaf level of the clustered index. To save disk space, reduce index rebuild times, and reduce backup space and times, you might actually opt NOT to build an index to support this query, especially if it's the only one of its kind. Even adding the clustered index didn't actually reduce the number of reads for this type of query, so maybe no index is the best solution here. 😀

Depending on what else is going on and whether or not you want to return a smaller number of columns, a non-clustered index will pay off even more than the clustered index because it's narrower and has fewer pages to read. If we change the test table back to a non-clustered index and change the test code so that each query returns only the two dates instead of using *, here's what I get. It's a pretty good improvement across the board even though the two non-SARGable methods still did index scans instead of seeks. This is proof positive that small covering indexes are a really good thing, so long as you remember that all non-clustered indexes are a duplication of data and usually require more maintenance than clustered indexes because they're usually not in the same order as the data in the table (clustered index or heap).

========== SARGable Method from Article ==============================
Table 'TestTable'. Scan count 3, logical reads 14583, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(207103 row(s) affected)
(1 row(s) affected)
 SQL Server Execution Times:
   CPU time = 328 ms,  elapsed time = 170 ms.
========== Non-SARGable Method 1 =====================================
========== Note that it returns the wrong number of rows.
Table 'TestTable'. Scan count 3, logical reads 18888, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(209863 row(s) affected)
(1 row(s) affected)
 SQL Server Execution Times:
   CPU time = 4788 ms,  elapsed time = 2532 ms.
========== Non-SARGable Method 2 =====================================
Table 'TestTable'. Scan count 3, logical reads 18916, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(207103 row(s) affected)
(1 row(s) affected)
 SQL Server Execution Times:
   CPU time = 2716 ms,  elapsed time = 1609 ms.
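In case it's not obvious, "returning only the two dates" means nothing more than swapping the * for the two indexed columns in each of the three test queries so the narrow non-clustered index fully covers them. For example, here's what the SARGable query becomes (only the column list differs from the earlier test code):

```sql
--===== Same SARGable test as before, but returning only the two indexed dates
     -- so the narrow non-clustered index IX_TestTable fully covers the query.
 SELECT StartDate, EndDate
   INTO #Test1
   FROM dbo.TestTable
  WHERE EndDate   >= DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')  ,0)  --Finds first of month
    AND StartDate <  DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')+1,0)  --Finds first of next month
;
```

The same one-line change applies to the two non-SARGable queries.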

The only other question would be: what happens if we have a clustered index on something like an IDENTITY column, a non-clustered index on the dates as before, and we want to return some non-covered columns? The answer is that all of the code will revert to a Clustered Index Scan (same as a table scan) and the well-written SARGable code will still win.

--============================================================================================
--      Create a larger test table with the appropriate index
--============================================================================================
--===== Do this in a nice, safe place that everyone has.
    USE tempdb
;
--===== Conditionally drop the test table to make reruns easier in SSMS.
     -- Rerun this after testing to cleanup TempDB
     IF OBJECT_ID('tempdb.dbo.TestTable','U') IS NOT NULL
        DROP TABLE tempdb.dbo.TestTable
;
GO
--===== Create and populate the test table on-the-fly.
     -- The StartDates all have random values for a 5 year period including
     -- all dates from 2010 up to and NOT including 2015.
     -- The EndDates will be from 0 to not quite 90 days later than the StartDate.
   WITH
cteStartDate AS
(
 SELECT TOP 5000000
        StartDate = DATEADD(dd,ABS(CHECKSUM(NEWID()))%DATEDIFF(dd,'2010','2015'),'2010')
   FROM master.sys.all_columns ac1
  CROSS JOIN master.sys.all_columns ac2
)
 SELECT RowNum    = IDENTITY(INT,1,1)
       ,StartDate
       ,EndDate   = DATEADD(dd,ABS(CHECKSUM(NEWID()))%90,fd.StartDate)
       ,SomeCol01 = NEWID() --Just to have something else in the table
       ,SomeCol02 = NEWID() --Just to have something else in the table
   INTO dbo.TestTable
   FROM cteStartDate fd
;
--===== Add the clustered index as a PK
  ALTER TABLE dbo.TestTable
    ADD PRIMARY KEY CLUSTERED (RowNum)
;
--===== Add the index in question
 CREATE NONCLUSTERED INDEX IX_TestTable
     ON dbo.TestTable (StartDate,EndDate)
;
GO
--============================================================================================
--      Test the code
--============================================================================================
--===== Find all rows that are "active" anytime in the desired month
  PRINT '========== SARGable Method from Article ==============================';
    SET STATISTICS TIME,IO ON;
 SELECT *
   INTO #Test1
   FROM dbo.TestTable
  WHERE EndDate   >= DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')  ,0)  --Finds first of month
    AND StartDate <  DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')+1,0); --Finds first of next month
    SET STATISTICS TIME,IO OFF;
  PRINT '========== Non-SARGable Method 1 =====================================';
  PRINT '========== Note that it returns the wrong number of rows.';
    SET STATISTICS TIME,IO ON;
 SELECT *
   INTO #Test2
   FROM dbo.TestTable
  WHERE CONVERT(CHAR(10),EndDate,102)   >= CAST('01 Oct 2013' AS DATETIME)
    AND CONVERT(CHAR(10),StartDate,102) <= CAST('01 Nov 2013' AS DATETIME); --Need correction here
    SET STATISTICS TIME,IO OFF;
  PRINT '========== Non-SARGable Method 2 =====================================';
    SET STATISTICS TIME,IO ON;
 SELECT *
   INTO #Test3
   FROM dbo.TestTable
  WHERE DATEDIFF(month,StartDate,'Oct 2013') >= 0
    AND DATEDIFF(month,'Oct 2013',EndDate)   >= 0;
    SET STATISTICS TIME,IO OFF;
--===== Drop the test results tables
   DROP TABLE #Test1, #Test2, #Test3
;

(5000000 row(s) affected)
========== SARGable Method from Article ==============================
Table 'TestTable'. Scan count 3, logical reads 38189, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 577 ms,  elapsed time = 280 ms.
(206772 row(s) affected)
========== Non-SARGable Method 1 =====================================
========== Note that it returns the wrong number of rows.
Table 'TestTable'. Scan count 3, logical reads 38077, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 4634 ms,  elapsed time = 2279 ms.
(209462 row(s) affected)
========== Non-SARGable Method 2 =====================================
Table 'TestTable'. Scan count 3, logical reads 38145, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 2714 ms,  elapsed time = 1534 ms.
(206772 row(s) affected)

    So, the very bottom line is, "It Depends". If you need to return more columns than you're willing to put into a covering index because of the duplication of data that all non-clustered indexes cause, then the clustered index on the two dates will pay off handsomely. Otherwise, having a narrow non-clustered index is the best bet and having no index at all isn't bad, either. The performance is in the code. As Dwain Camps might say, "Putting an index on a table is like putting sugar on cat food. It might taste better but you still might not want to eat it." 😛

Sorry about the long-winded answer, but it was an excellent question that required a deep dive to adequately explain.

    --Jeff Moden



  • Wow thanks for what was effectively part 2 in this spackle 🙂 You've written an article in its own right to answer my question and I am very grateful, as should anyone else reading this, especially those not already familiar with SARG-able queries, or still lured by the glamour of writing complicated looking code versus performance code.

    Thank you for your test examples too. The team I am working with are looking at a DW with 500m+ rows of time interval data with a requirement to run calculations on user-selected datetime slices, sub-sliced into smaller regular intervals for reporting purposes, so this is very relevant.

  • Very nice article Jeff and I put in my vote to bump the rating accordingly.

    As it turns out, I ran across this problem early in my SQLing. I like your breakdown into negative and positive approaches. Those may not be intuitive to newbies. I was fortunate in that I realized the positive approach relatively quickly.

The unfortunate aspect of this is that the problem becomes somewhat more complex when you have open (NULL) start and/or end dates. And that is something I find pretty often. It is very tempting to come up with a non-SARGable solution in those cases. I assume your follow-on will elucidate that case. I'd have to go back and check what I did with it. It's been a while now.

The article was so good I Tweeted it to my minuscule (but elite) following.

    BTW. The quote was close but not quite, although you did manage to get the point across. 😀


My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

    My thought question: Have you ever been told that your query runs too fast?

    My advice:
    INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
    The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.

Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.

Thanks for the feedback and the Tweet, Dwain.

    I guess enough people have made the mistake of allowing NULLable StartDates (seriously??? :blink:) and/or EndDates that it warrants a follow up article on the subject. Maybe not... maybe I'll just append the following to an already too long Spackle article. 🙂

In the meantime, OR is SARGable when done correctly. To wit...

     SELECT *
    INTO #Test1
    FROM dbo.TestTable
    WHERE (EndDate >= DATEADD(mm,DATEDIFF(mm,0,'Oct 2013') ,0) --Finds first of month
    OR EndDate IS NULL)
    AND (StartDate < DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')+1,0) --Finds first of next month
    OR StartDate IS NULL)
    ;

    If you look at the Actual Execution plan for the code above, you still get a nice Index Seek even when a small number of rows is present.
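For contrast, here's a sketch of the tempting but non-SARGable rewrite of that same filter (illustrative only, with assumed default dates for the open ends): because ISNULL wraps the columns in a function, the optimizer can no longer seek on StartDate/EndDate and will scan instead.

```sql
--===== The tempting but NON-SARGable alternative for NULLable dates:
     -- wrapping the columns in ISNULL defeats an index seek on (StartDate,EndDate).
     -- A NULL EndDate is treated as "still active" (max DATETIME date) and a
     -- NULL StartDate as "started at the dawn of time" (min DATETIME date).
 SELECT *
   FROM dbo.TestTable
  WHERE ISNULL(EndDate  ,'99991231') >= DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')  ,0)
    AND ISNULL(StartDate,'17530101') <  DATEADD(mm,DATEDIFF(mm,0,'Oct 2013')+1,0)
;
```

Both versions return the same rows; only the OR IS NULL form above leaves the columns bare and keeps the seek available.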

    --Jeff Moden



  • Excellent article Jeff - and rated as such.

-------------------------------
Posting Data Etiquette - Jeff Moden
Smart way to ask a question
    There are naive questions, tedious questions, ill-phrased questions, questions put after inadequate self-criticism. But every question is a cry to understand (the world). There is no such thing as a dumb question. ― Carl Sagan
    I would never join a club that would allow me as a member - Groucho Marx

  • Stuart Davies (2/3/2014)


    Excellent article Jeff - and rated as such.

    Thanks, Stuart. I appreciate it. Thanks for the read.

    --Jeff Moden



  • davoscollective (2/2/2014)


    Wow thanks for what was effectively part 2 in this spackle 🙂 You've written an article in its own right to answer my question and I am very grateful, as should anyone else reading this, especially those not already familiar with SARG-able queries, or still lured by the glamour of writing complicated looking code versus performance code.

    Thank you for your test examples too. The team I am working with are looking at a DW with 500m+ rows of time interval data with a requirement to run calculations on user-selected datetime slices, sub-sliced into smaller regular intervals for reporting purposes, so this is very relevant.

Thank you very much for the feedback. You asked an excellent question and I'm really glad I could help, especially considering the volume of the table you're talking about. If you get the chance, I'd love to hear what you ended up doing with that table. It might even be worth a "here's what we did" article on your part, because a lot of people are in the same situation.

    --Jeff Moden



  • Wonderful article Jeff.

    Once Again!! As Usual!!!

    "Keep Trying"

  • davoscollective (2/2/2014)


    Wow thanks for what was effectively part 2 in this spackle 🙂 You've written an article in its own right to answer my question...

    I completely agree. That's one of the best follow-up posts that I've seen in quite a while here.

    Jeff - I would suggest that you take that post, and make it into its own article. It's likely it will be read more than being stuck here in a follow up post on this article (you know how a lot of folks don't read the ensuing discussion), and this point is so valuable to make.

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • WayneS (2/5/2014)

    Jeff - I would suggest that you take that post, and make it into its own article. It's likely it will be read more than being stuck here in a follow up post on this article (you know how a lot of folks don't read the ensuing discussion), and this point is so valuable to make.

    I used to mainly read the articles; but I've found that I often learn a lot from the follow-up posts. But of course, anyone reading this has already figured that out.

