date fillers - dates in two rows

  • ekant_alone - Thursday, August 31, 2017 4:07 PM

    Yes, I am allowed to. But I am confused about how I can help my case using an iTVF here. Can you please help me? I am super stuck. Thank you.

    I can't speak for Jeff, but I imagine that he was going to offer you his tally function code.

    In the meantime, here's the one I use... I imagine they are both pretty similar...
    CREATE FUNCTION dbo.tfn_Tally
    /* ============================================================================
    07/20/2017 JL, Created. Capable of creating a sequence of rows
                    ranging from -10,000,000,000,000,000 to 10,000,000,000,000,000
    ============================================================================ */
    (
        @NumOfRows BIGINT,
        @StartWith BIGINT
    )
    RETURNS TABLE WITH SCHEMABINDING AS
    RETURN
        WITH
            cte_n1 (n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) n (n)),    -- 10 rows
            cte_n2 (n) AS (SELECT 1 FROM cte_n1 a CROSS JOIN cte_n1 b),                                -- 100 rows
            cte_n3 (n) AS (SELECT 1 FROM cte_n2 a CROSS JOIN cte_n2 b),                                -- 10,000 rows
            cte_n4 (n) AS (SELECT 1 FROM cte_n3 a CROSS JOIN cte_n3 b),                                -- 100,000,000 rows
            cte_Tally (n) AS (
                SELECT TOP (@NumOfRows)
                    (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1) + @StartWith
                FROM
                    cte_n4 a CROSS JOIN cte_n4 b                                                    -- 10,000,000,000,000,000 rows
                )
        SELECT
            t.n
        FROM
            cte_Tally t;
    GO
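
    For reference, a quick usage sketch, assuming the function is created exactly as posted: the first parameter is the number of rows to return and the second is the starting value.

    --===== Illustration only: 5 rows starting at 0, then 5 rows starting at 100.
     SELECT n FROM dbo.tfn_Tally(5, 0);     -- returns 0, 1, 2, 3, 4
     SELECT n FROM dbo.tfn_Tally(5, 100);   -- returns 100, 101, 102, 103, 104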

  • ekant_alone - Thursday, August 31, 2017 4:07 PM

    Yes, I am allowed to. But I am confused about how I can help my case using an iTVF here. Can you please help me? I am super stuck. Thank you.

    I could tell by your use of a WHILE loop that you're confused. 😉  I needed to know whether you could use iTVFs so that we can get away from the WHILE loop, get a much more substantial range for a "Tally" sequence, and give you and others a tool so that you don't have to keep reinventing the wheel when it comes to a "Pseudo Cursor" replacement, such as the function I'm providing, for that WHILE loop functionality.  This also blows the doors off of rCTEs (Recursive CTEs) that do the same thing.

    With that being said, run the following code to build the dbo.fnTallyProg ("Prog" as in "Programmable") function.


     CREATE FUNCTION dbo.fnTallyProg
    /**********************************************************************************************************************
     Purpose:
     Given a start value, end value, and increment, create a sequential list of integers.

     Programmers Notes:
     1. The increment can be negative if the start value is greater than the end value. In other words, it can count down
        as well as up.

     Revision History:
     Rev 00 - 18 Feb 2017 - Jeff Moden
            - Rewrite original to take start, end, and increment parameters.
    **********************************************************************************************************************/
            (
             @pStart     BIGINT
            ,@pEnd       BIGINT
            ,@pIncrement BIGINT
            )
    RETURNS TABLE WITH SCHEMABINDING AS
     RETURN WITH
     E01(N)   AS (SELECT NULL FROM (VALUES (NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL))E0(N)) --10 rows
    ,E04(N)   AS (SELECT NULL FROM E01 a CROSS JOIN E01 b CROSS JOIN E01 c CROSS JOIN E01 d) --10 Thousand rows
    ,E16(N)   AS (SELECT NULL FROM E04 a CROSS JOIN E04 b CROSS JOIN E04 c CROSS JOIN E04 d) --10 Quadrillion rows, which is crazy
    ,Tally(N) AS (SELECT TOP (ABS((@pEnd-@pStart+@pIncrement)/@pIncrement))
                         N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                    FROM E16
                   WHERE (@pStart<=@pEnd AND @pIncrement > 0)
                      OR (@pStart>=@pEnd AND @pIncrement < 0)
                   ORDER BY N
                 )
          SELECT TOP (ABS((@pEnd-@pStart+@pIncrement)/@pIncrement))
                 N = (t.N-1)*@pIncrement+@pStart
            FROM Tally t
           ORDER BY t.N
    ;
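
    As a quick illustration, assuming the function is created as posted above, here's how the three parameters behave, including counting down with a negative increment:

    --===== Illustration only.
     SELECT N FROM dbo.fnTallyProg(1, 5, 1);    -- 1, 2, 3, 4, 5
     SELECT N FROM dbo.fnTallyProg(0, 23, 1);   -- 0 through 23 (handy for filling the hours in a day)
     SELECT N FROM dbo.fnTallyProg(10, 0, -2);  -- 10, 8, 6, 4, 2, 0 (counts down)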

    Now that you've created the "Swiss Army Knife" function for SQL Server, here's the code that others provided to create your test data.  Please stop posting graphics.  They are virtually of no use to us when it comes to test data and make the answers much slower in coming.  Help us help you...  If you have another data example to add, do it in this code.


    --=====================================================================================================================
    --      Create the given test data.
    --      This is NOT a part of the solution.  We're just simulating the real table here.
    --      If you have any additions, stop posting useless graphics and add the data to this code.
    --=====================================================================================================================
    --===== If the test table already exists, drop it to make reruns in SSMS easier.
         IF OBJECT_ID('tempdb..#OutputResultSet','U') IS NOT NULL
            DROP TABLE #OutputResultSet
    ;
    GO
    --===== Create the test table.  Lord only knows what the PK should actually be. Probably Pubsourceid and StartDate,
         -- so that's what I used.
     CREATE TABLE #OutputResultSet
            (
             value              INT         NOT NULL
            ,dataitemuniverseid INT         NOT NULL
            ,startdate          DATETIME    NOT NULL
            ,Pubsourceid        VARCHAR(50) NOT NULL
            ,PubsourceName      VARCHAR(50) NOT NULL
            )
    ;
    --===== Populate the test table with test data.
         -- Add any new data here instead of posting bloody graphics, which are mostly useless.
     INSERT INTO #OutputResultSet
            (value, dataitemuniverseid, startdate, Pubsourceid, PubsourceName)
     VALUES
             (   0, 32833, '2017-08-07 11:00:00.000', 147942, 'EL')
            ,(   0, 32835, '2017-08-08 11:00:00.000', 147942, 'EL')
            ,( 230, 32835, '2017-08-11 14:00:00.000', 147942, 'EL')
            ,( 230, 32833, '2017-08-12 14:00:00.000', 147942, 'EL')
            ,( 158, 32832, '2016-08-01 23:00:00.000', 280159, 'ENT')
            ,(   0, 32832, '2016-08-03 11:00:00.000', 280159, 'ENT')
            ,( 158, 32832, '2016-08-05 15:00:00.000', 280159, 'ENT')
            ,(   0, 32830, '2016-08-07 23:00:00.000', 280159, 'ENT')
            ,( 158, 32830, '2016-08-09 23:00:00.000', 280159, 'ENT')
            ,( 433, 35318, '2016-08-05 15:00:00.000', 146626, 'GDF Suez') --Added per your latest request
            ,( 433, 35318, '2017-01-10 11:00:00.000', 146626, 'GDF Suez') --Added per your latest request
    ;

    Ok... so why do we need the dbo.fnTallyProg function?  According to your original post, you said...

    I need a very efficient way to duplicate rows based on dates in rows.


    ... and, other than using a physical Tally table, there is no faster way. It's also quite efficient in that it requires no reads no matter how many numbers it needs to produce.  It also makes problems like this a breeze because now you can just call the functionality using a CROSS APPLY instead of having to rebuild it in every place that you might need it, now and in the future.

    And now, borrowing heavily from the good code that drew.allen previously posted, here's a very fast, very efficient solution that's also easy to write, troubleshoot, and implement.  I've also instituted several best practices for documentation, readability, and the usage of aliases.


       WITH cteMissingHours AS
    (--==== Determine the missing hours between the current row and the temporally next row
         -- according to the PubSourceID as a grouping.
     SELECT *
            ,MissingHours = DATEDIFF(hh, startdate, LEAD(startdate, 1, DATEADD(hh, 1, startdate))
                            OVER (PARTITION BY PubSourceID ORDER BY startdate)) - 1
       FROM #OutputResultSet
      --ORDER BY Pubsourceid, startdate
    )--==== Using the fnTallyProg function as a row source and source of integers, use it for calculated,
         -- constrained relational multiplication to produce the desired result.
     SELECT  mh.value
            ,mh.dataitemuniverseid
            ,StartDT = DATEADD(hh, t.N, mh.startdate)
            ,mh.Pubsourceid
            ,mh.PubsourceName
       FROM cteMissingHours mh
      CROSS APPLY dbo.fnTallyProg(0,mh.MissingHours,1) t
      ORDER BY mh.Pubsourceid, StartDT
    ;
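
    To see what the CTE feeds into the CROSS APPLY, here's a sanity check, shown only as an aside, that runs the gap calculation by itself. Working through the sample data: for PubSourceID 147942, the row at 2017-08-07 11:00 is followed by 2017-08-08 11:00, so DATEDIFF(hh, ...) returns 24 and MissingHours is 23, and dbo.fnTallyProg(0, 23, 1) then expands that one row into 24 hourly rows.

    --===== Aside: inspect the MissingHours calculation on its own before expanding rows.
     SELECT  Pubsourceid
            ,startdate
            ,MissingHours = DATEDIFF(hh, startdate, LEAD(startdate, 1, DATEADD(hh, 1, startdate))
                            OVER (PARTITION BY PubSourceID ORDER BY startdate)) - 1
       FROM #OutputResultSet
      ORDER BY Pubsourceid, startdate
    ;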

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I suppose I'll throw in a solution myself...
    CREATE TABLE #OutputResultSet (
      value INT,
      dataitemuniverseid INT,
      startdate DATETIME,
      Pubsourceid VARCHAR(50),
      PubsourceName VARCHAR(50)
      );
    INSERT #OutputResultSet (value, dataitemuniverseid, startdate, Pubsourceid, PubsourceName) VALUES
        (0, 32833, '2017-08-07 11:00:00.000', 147942, 'EL'),
        (0, 32835, '2017-08-08 11:00:00.000', 147942, 'EL'),
        (230, 32835, '2017-08-11 14:00:00.000', 147942, 'EL'),
        (230, 32833, '2017-08-12 14:00:00.000', 147942, 'EL'),
        (158, 32832, '2016-08-01 23:00:00.000', 280159, 'ENT'),
        (0, 32832, '2016-08-03 11:00:00.000', 280159, 'ENT'),
        (158, 32832, '2016-08-05 15:00:00.000', 280159, 'ENT'),
        (0, 32830, '2016-08-07 23:00:00.000', 280159, 'ENT'),
        (158, 32830, '2016-08-09 23:00:00.000', 280159, 'ENT');

    -- Add a "POC" index to get rid of the sort caused by the use of the LEAD function.
    CREATE CLUSTERED INDEX cix_OutputResultSet ON #OutputResultSet (Pubsourceid, startdate);
    --================================================================

    WITH
        cte_DateGap AS (
            SELECT
                ors.value, ors.dataitemuniverseid, ors.startdate, ors.Pubsourceid, ors.PubsourceName,
                DateGap = DATEDIFF(DAY, ors.startdate, LEAD(ors.startdate, 1, ors.startdate) OVER (PARTITION BY ors.Pubsourceid ORDER BY ors.startdate))
            FROM
                #OutputResultSet ors
            )
    SELECT
        dg.value,
        dg.dataitemuniverseid,
        startdate = DATEADD(dd, t.n, dg.startdate),
        dg.Pubsourceid,
        dg.PubsourceName
    FROM
        cte_DateGap dg
        CROSS APPLY dbo.tfn_Tally(ISNULL(NULLIF(dg.DateGap, 0), 1), 0) t;
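
    One detail worth noting about the call above: for the last row in each Pubsourceid partition, LEAD defaults to the row's own startdate, so DateGap comes back as 0. The ISNULL(NULLIF(DateGap, 0), 1) expression turns that 0 into a 1 so the tally function still returns a single row (n = 0) and the original row isn't lost. A quick check, shown only as an aside:

    --===== Aside: how the row-count expression behaves.
     SELECT DateGap = 0, RowsRequested = ISNULL(NULLIF(0, 0), 1);   -- last row in a partition -> 1 row (n = 0)
     SELECT DateGap = 3, RowsRequested = ISNULL(NULLIF(3, 0), 1);   -- 3-day gap -> 3 rows (n = 0, 1, 2)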

  • Jason A. Long - Thursday, August 31, 2017 6:40 PM

    ekant_alone - Thursday, August 31, 2017 4:07 PM

    Yes, I am allowed to. But I am confused about how I can help my case using an iTVF here. Can you please help me? I am super stuck. Thank you.

    I can't speak for Jeff, but I imagine that he was going to offer you his tally function code.

    In the meantime, here's the one I use... I imagine they are both pretty similar...
    CREATE FUNCTION dbo.tfn_Tally
    /* ============================================================================
    07/20/2017 JL, Created. Capable of creating a sequence of rows
                    ranging from -10,000,000,000,000,000 to 10,000,000,000,000,000
    ============================================================================ */
    (
        @NumOfRows BIGINT,
        @StartWith BIGINT
    )
    RETURNS TABLE WITH SCHEMABINDING AS
    RETURN
        WITH
            cte_n1 (n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) n (n)),    -- 10 rows
            cte_n2 (n) AS (SELECT 1 FROM cte_n1 a CROSS JOIN cte_n1 b),                                -- 100 rows
            cte_n3 (n) AS (SELECT 1 FROM cte_n2 a CROSS JOIN cte_n2 b),                                -- 10,000 rows
            cte_n4 (n) AS (SELECT 1 FROM cte_n3 a CROSS JOIN cte_n3 b),                                -- 100,000,000 rows
            cte_Tally (n) AS (
                SELECT TOP (@NumOfRows)
                    (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1) + @StartWith
                FROM
                    cte_n4 a CROSS JOIN cte_n4 b                                                    -- 10,000,000,000,000,000 rows
                )
        SELECT
            t.n
        FROM
            cte_Tally t;
    GO

    Heh... I've said it before and I'll say it again, great minds think alike, Jason.  I incorporated it into the latest request, both for hours and for the extended span of time between the two rows he recently mentioned, and added Drew's good test harness so that it's all in one place now.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Thursday, August 31, 2017 8:33 PM

    Heh... I've said it before and I'll say it again, great minds think alike, Jason.  I did take the time to incorporate it into the latest request both for hours and the extended amount of time between the two rows he recently talked about and added Drew's good test harness so that it's all in one place now.

    Well, coming from you, that is high praise indeed! Fact is, I only wish my brain worked like yours... If our code patterns have similarities, it's only because I've been reading your stuff for years now.

    If you have a chance, take a look at the attached (actual) execution plan. It's 4 executions... Mine then yours w/o an index... add an index... then mine and yours again with the index.
    Looking at both sets of code, I'm not seeing why mine is being estimated so much lower. I looked at STATISTICS IO & TIME but the test harness is too small to deduce anything. Both have the same scan and logical read counts and both finish in 0 ms with a warm cache.
    Come to think of it, I should have thrown Drew's in there too... :crying:
    Anyway, hopefully you'll see something I'm not...

    Brilliant! Wow!! Wowwww! Jason, Steve, Drew, and Jeff - THANK YOU ALL. I have learnt many things and will always keep this 'Swiss Army Knife' in my SQL arsenal. Jeff, I never knew I could use CTEs this way. I have set up an alert on Google for all your posts now. I am sure I will learn many things from your code going forward.
    Can't THANK YOU all enough!

  • ekant_alone - Friday, September 1, 2017 7:52 AM

    Brilliant! Wow!! Wowwww! Jason, Steve, Drew, and Jeff - THANK YOU ALL. I have learnt many things and will always keep this 'Swiss Army Knife' in my SQL arsenal. Jeff, I never knew I could use CTEs this way. I have set up an alert on Google for all your posts now. I am sure I will learn many things from your code going forward.
    Can't THANK YOU all enough!

    I wish I could take full credit for it, but the original idea of cCTEs (Cascading CTEs, not to be confused with rCTEs or Recursive CTEs, which are horribly slow for this type of thing) to build a sequence of numbers isn't mine.  The first time I saw such a thing was from a very well-known and thoughtful SQL Ninja by the name of Itzik Ben-Gan.
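
    For contrast, and only as a rough sketch, this is the recursive-CTE style of number generator that the cascading CTEs replace; it produces the same kind of sequence but builds it one row at a time:

    --===== Recursive CTE counter, shown only for comparison with the cascading CTEs above.
     WITH rTally AS
     (
         SELECT N = 1
         UNION ALL
         SELECT N + 1 FROM rTally WHERE N < 100
     )
     SELECT N FROM rTally
     OPTION (MAXRECURSION 0);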

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • drew.allen - Thursday, August 31, 2017 10:02 AM

    First, sample data should be provided as an INSERT statement into your sample table.

    The solution below uses a tally table.  There is a blazing fast inline table-valued function that takes a value and produces that many rows, but I haven't included that here, because not everyone can create UDFs.  Instead, I'm using a table variable for the tally table.

    Tally table setup.
    DECLARE @Tally TABLE( n INT)

    /* This uses a zero-based Tally table instead of the typical one-based tally. */
    INSERT @Tally(n)
    VALUES (0), (1), (2), (3), (4), (5)

    Sample data (the correct way)
    CREATE TABLE #OutputResultSet
    (
     value              INT
    ,dataitemuniverseid INT
    ,startdate          DATETIME
    ,Pubsourceid        VARCHAR(50)
    ,PubsourceName      VARCHAR(50)
    );

    INSERT #OutputResultSet(value, dataitemuniverseid, startdate, Pubsourceid, PubsourceName)
    VALUES
    (0, 32833, '2017-08-07 11:00:00.000', 147942, 'EL'),
    (0, 32835, '2017-08-08 11:00:00.000', 147942, 'EL'),
    (230, 32835, '2017-08-11 14:00:00.000', 147942, 'EL'),
    (230, 32833, '2017-08-12 14:00:00.000', 147942, 'EL'),
    (158, 32832, '2016-08-01 23:00:00.000', 280159, 'ENT'),
    (0, 32832, '2016-08-03 11:00:00.000', 280159, 'ENT'),
    (158, 32832, '2016-08-05 15:00:00.000', 280159, 'ENT'),
    (0, 32830, '2016-08-07 23:00:00.000', 280159, 'ENT'),
    (158, 32830, '2016-08-09 23:00:00.000', 280159, 'ENT')

    The solution
    ;
    WITH PubDateRanges AS
    (
        SELECT *, DATEDIFF(DAY, startdate, LEAD(startdate, 1, DATEADD(DAY, 1, startdate)) OVER(PARTITION BY PubSourceID ORDER BY startdate)) - 1 as missingdays
        FROM #OutputResultSet
    )
    SELECT value, dataitemuniverseid, DATEADD(DAY, n, startdate) AS startdate, Pubsourceid, PubsourceName
    FROM PubDateRanges
    INNER JOIN @Tally
        ON missingdays >= n
    ORDER BY Pubsourceid, startdate
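
    Worked through against the sample data: for Pubsourceid 147942, the 2017-08-08 row's next startdate is 2017-08-11, so missingdays = 3 - 1 = 2 and the join matches n = 0, 1, and 2, producing rows for 08-08, 08-09, and 08-10; rows with missingdays = 0 match only n = 0 and pass through unchanged.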

    Drew

    p.s. Thank you for this. I was stuck on 2005 and 2008 for about a million years and haven't had the opportunity to do much work with LEAD and LAG since we finally got up to 2012 in 2016. Your code is a really good example of how simple it can be.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jason A. Long - Thursday, August 31, 2017 9:36 PM

    Jeff Moden - Thursday, August 31, 2017 8:33 PM

    Heh... I've said it before and I'll say it again, great minds think alike, Jason.  I did take the time to incorporate it into the latest request both for hours and the extended amount of time between the two rows he recently talked about and added Drew's good test harness so that it's all in one place now.

    Well, coming from you, that is high praise indeed! Fact is, I only wish my brain worked like yours... If our code patterns have similarities, it's only because I've been reading your stuff for years now.

    If you have a chance, take a look at the attached (actual) execution plan. It's 4 executions... Mine then yours w/o an index... add an index... then mine and yours again with the index.
    Looking at both sets of code, I'm not seeing why mine is being estimated so much lower. I looked at STATISTICS IO & TIME but the test harness is too small to deduce anything. Both have the same scan and logical read counts and both finish in 0 ms with a warm cache.
    Come to think of it, I should have thrown Drew's in there too... :crying:
    Anyway, hopefully you'll see something I'm not...

    I've not done a deep dive but I think the difference in estimates is that your function resolves to a maximum of 4 CROSS JOINs and mine resolves to 16.  SQL Server isn't able to estimate the parameter driven short circuits and so has to consider all of the  CROSS JOINs.
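
    If it helps to reproduce the comparison, one way (shown only as a sketch) is to request the same row count from each function and compare the estimated row counts in the two plans:

    --===== Sketch: same row count requested from each function for a side-by-side estimate check.
     SELECT N FROM dbo.fnTallyProg(1, 1000000, 1);
     SELECT n FROM dbo.tfn_Tally(1000000, 1);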

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Friday, September 1, 2017 8:19 AM

    I've not done a deep dive but I think the difference in estimates is that your function resolves to a maximum of 4 CROSS JOINs and mine resolves to 16.  SQL Server isn't able to estimate the parameter driven short circuits and so has to consider all of the  CROSS JOINs.

    Thank you for looking Jeff. That may very well be the case. I'll try looking at it again with fresh eyes.

  • Jeff Moden - Friday, September 1, 2017 8:15 AM

    drew.allen - Thursday, August 31, 2017 10:02 AM

    First, sample data should be provided as an INSERT statement into your sample table.

    The solution below uses a tally table.  There is a blazing fast inline table-valued function that takes a value and produces that many rows, but I haven't included that here, because not everyone can create UDFs.  Instead, I'm using a table variable for the tally table.

    Tally table setup.
    DECLARE @Tally TABLE( n INT)

    /* This uses a zero-based Tally table instead of the typical one-based tally. */
    INSERT @Tally(n)
    VALUES (0), (1), (2), (3), (4), (5)

    Sample data (the correct way)
    CREATE TABLE #OutputResultSet
    (
     value              INT
    ,dataitemuniverseid INT
    ,startdate          DATETIME
    ,Pubsourceid        VARCHAR(50)
    ,PubsourceName      VARCHAR(50)
    );

    INSERT #OutputResultSet(value, dataitemuniverseid, startdate, Pubsourceid, PubsourceName)
    VALUES
    (0, 32833, '2017-08-07 11:00:00.000', 147942, 'EL'),
    (0, 32835, '2017-08-08 11:00:00.000', 147942, 'EL'),
    (230, 32835, '2017-08-11 14:00:00.000', 147942, 'EL'),
    (230, 32833, '2017-08-12 14:00:00.000', 147942, 'EL'),
    (158, 32832, '2016-08-01 23:00:00.000', 280159, 'ENT'),
    (0, 32832, '2016-08-03 11:00:00.000', 280159, 'ENT'),
    (158, 32832, '2016-08-05 15:00:00.000', 280159, 'ENT'),
    (0, 32830, '2016-08-07 23:00:00.000', 280159, 'ENT'),
    (158, 32830, '2016-08-09 23:00:00.000', 280159, 'ENT')

    The solution
    ;
    WITH PubDateRanges AS
    (
        SELECT *, DATEDIFF(DAY, startdate, LEAD(startdate, 1, DATEADD(DAY, 1, startdate)) OVER(PARTITION BY PubSourceID ORDER BY startdate)) - 1 as missingdays
        FROM #OutputResultSet
    )
    SELECT value, dataitemuniverseid, DATEADD(DAY, n, startdate) AS startdate, Pubsourceid, PubsourceName
    FROM PubDateRanges
    INNER JOIN @Tally
        ON missingdays >= n
    ORDER BY Pubsourceid, startdate

    Drew

    p.s. Thank you for this. I was stuck on 2005 and 2008 for about a million years and haven't had the opportunity to do much work with LEAD and LAG since we finally got up to 2012 in 2016. Your code is a really good example of how simple it can be.

    You're welcome.  Again, the inspiration came from Itzik Ben-Gan and his work on islands and gaps and packing intervals.

    Drew

    PS: The blazing fast UDF that I referred to in my original post is the one written by Itzik Ben-Gan.

    J. Drew Allen
    Business Intelligence Analyst
    Philadelphia, PA

  • ekant_alone - Friday, September 1, 2017 7:52 AM

    Brilliant! Wow!! Wowwww! Jason, Steve, Drew, and Jeff - THANK YOU ALL. I have learnt many things and will always keep this 'Swiss Army Knife' in my SQL arsenal. Jeff, I never knew I could use CTEs this way. I have set up an alert on Google for all your posts now. I am sure I will learn many things from your code going forward.
    Can't THANK YOU all enough!

    p.s.  I'm both honored and humbled that you think well enough of the solution I provided to set up an alert to follow me, but I'm not the only one who knows things.  For example, I borrowed code that Drew posted to very quickly find the "next row".  Jason is obviously skilled in the use of the "Tally Function" and, like so many other heavy hitters on this forum, built one of his own.  Steve flushed out an important problem in the original data/results posted.

    The community of people on this forum, heavy hitter or not, is just extraordinary, and there's a wealth of real-life problems and some incredibly innovative solutions to be had.  Rather than following just one lowly DBA/Developer, follow the crowd.  Check out the forum posts every day.  You'll be amazed at the quantity and quality of the solutions that others come up with, not to mention frequent performance comparisons as well as some incredible discussions in the areas of what, why, and how.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
