Calculate the Running Total for the last five Transactions

  • Comments posted to this topic are about the item Calculate the Running Total for the last five Transactions

    --Divya

  • Good article, but let's take another approach and see what we see.

    Taking the sample data from your article and using a slightly different tack, here is another way to tackle the same problem. I have included your sample data and table (with a slight modification for my code), along with my code.

    set nocount on;
    go

    CREATE TABLE dbo.Accounts
    (
        ID int IDENTITY(1,1) primary key, -- Primary key defaults to a clustered index, needed for this to work
        TransactionDate datetime,
        Balance money,
        RunningBalance money null
    );
    go

    insert into Accounts (TransactionDate, Balance) values ('1/1/2000', 100);
    insert into Accounts (TransactionDate, Balance) values ('1/2/2000', 101);
    insert into Accounts (TransactionDate, Balance) values ('1/3/2000', 102);
    insert into Accounts (TransactionDate, Balance) values ('1/4/2000', 103);
    insert into Accounts (TransactionDate, Balance) values ('1/5/2000', 104);
    insert into Accounts (TransactionDate, Balance) values ('1/6/2000', 105);
    insert into Accounts (TransactionDate, Balance) values ('1/7/2000', 106);
    insert into Accounts (TransactionDate, Balance) values ('1/8/2000', 107);
    insert into Accounts (TransactionDate, Balance) values ('1/9/2000', 108);
    insert into Accounts (TransactionDate, Balance) values ('1/10/2000', 109);
    insert into Accounts (TransactionDate, Balance) values ('1/11/2000', 200);
    insert into Accounts (TransactionDate, Balance) values ('1/12/2000', 201);
    insert into Accounts (TransactionDate, Balance) values ('1/13/2000', 202);
    insert into Accounts (TransactionDate, Balance) values ('1/14/2000', 203);
    insert into Accounts (TransactionDate, Balance) values ('1/15/2000', 204);
    insert into Accounts (TransactionDate, Balance) values ('1/16/2000', 205);
    insert into Accounts (TransactionDate, Balance) values ('1/17/2000', 206);
    insert into Accounts (TransactionDate, Balance) values ('1/18/2000', 207);
    insert into Accounts (TransactionDate, Balance) values ('1/19/2000', 208);
    insert into Accounts (TransactionDate, Balance) values ('1/20/2000', 209);
    go

    select * from dbo.Accounts;
    go

    print '-- Cross Join Query --';
    set statistics io on;

    SELECT Acc.ID,
           CONVERT(varchar(50), TransactionDate, 101) AS TransactionDate,
           Balance,
           isnull(RunningTotal, '') AS RunningTotal
    FROM Accounts Acc
    LEFT OUTER JOIN (SELECT ID, sum(Balance) AS RunningTotal
                     FROM (SELECT A.ID AS ID, B.ID AS BID, B.Balance
                           FROM Accounts A
                           CROSS JOIN Accounts B
                           WHERE B.ID BETWEEN A.ID - 4 AND A.ID
                             AND A.ID > 4) T
                     GROUP BY ID) Bal
      ON Acc.ID = Bal.ID;

    set statistics io off;
    print '-- Cross Join Query --';
    go

    print '-- Update Query --';
    set statistics io on;

    declare @var1 money,
            @var2 money,
            @var3 money,
            @var4 money,
            @var5 money;

    update dbo.Accounts set
        @var5 = @var4,
        @var4 = @var3,
        @var3 = @var2,
        @var2 = @var1,
        @var1 = Balance,
        RunningBalance = isnull(@var1 + @var2 + @var3 + @var4 + @var5, 0);

    set statistics io off;
    print '-- Update Join Query --';
    go

    print '-- Select After Update Query --';
    set statistics io on;
    select * from dbo.Accounts;
    set statistics io off;
    print '-- Select After Update Query --';
    go

    drop table dbo.Accounts;
    go

    set nocount off;
    go

    My results match yours in the article, but I also wanted to include here the stats I collected using statistics io on (included in the code above).

    -- Cross Join Query --
    Table 'Accounts'. Scan count 17, logical reads 74, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    -- Cross Join Query --

    -- Update Query --
    Table 'Accounts'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    -- Update Join Query --

    -- Select After Update Query --
    Table 'Accounts'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    -- Select After Update Query --

    I'll leave further assessment of the different approaches to others. For me, I just wanted to see what might be different.

  • @Lynn:

    Gold 🙂

    Regards,

    Jacob

  • Brilliant Divya and Lynn

    "Keep Trying"

  • There is a huge problem with this technique. IDENTITY values are not guaranteed to be consecutive; for a variety of reasons there may be gaps. Those will invalidate the results from this query.

    On SQL Server 2005 and above, using ROW_NUMBER() is the recommended technique. On SQL Server 2000 and below - well, let's just say that the required query will be ugly... 🙁
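
    As a rough sketch of the ROW_NUMBER() approach (SQL Server 2005+), reusing the dbo.Accounts table from Lynn's script - illustrative only, not the article's code:

    WITH Numbered AS
    (
        SELECT ID, TransactionDate, Balance,
               ROW_NUMBER() OVER (ORDER BY TransactionDate, ID) AS rn
        FROM dbo.Accounts
    )
    SELECT A.ID, A.TransactionDate, A.Balance,
           CASE WHEN A.rn >= 5
                THEN (SELECT SUM(B.Balance)
                      FROM Numbered B
                      WHERE B.rn BETWEEN A.rn - 4 AND A.rn)
                ELSE 0
           END AS RunningTotal
    FROM Numbered A
    ORDER BY A.rn;

    Because rn is generated at query time, gaps in the IDENTITY column no longer matter.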


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo, are you talking about the quirky update for 2000?

  • Ninja's_RGR'us (11/27/2008)


    Hugo, are you talking about the quirky update for 2000?

    Hi Ninja's_RGR'us,

    No. As far as I know, UPDATE is not quirky at all in SQL Server 2000. What do you mean by "quirky update"?

    What I was referring to is the lack of ROW_NUMBER() in SQL Server 2000. This means you'll either have to take your chances with IDENTITY, at the risk of gaps, as the author of this article did; or you have to use a correlated subquery to calculate the row number on the fly, which can degrade performance dramatically as the number of rows grows. Plus, the queries tend to get long and hard to understand.
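
    As a minimal sketch of that correlated-subquery style (assuming TransactionDate is unique; the aliases are illustrative):

    SELECT A.TransactionDate, A.Balance,
           (SELECT COUNT(*)
            FROM dbo.Accounts B
            WHERE B.TransactionDate <= A.TransactionDate) AS RowNum
    FROM dbo.Accounts A
    ORDER BY RowNum;

    The running total over the last five rows then needs yet another nesting level on top of this, which is where the length and readability problems come from.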


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • I meant this by quirky update :

    Update table set @Var = UpdatedColumn = @Var + WhateverItTakesToEvaluateThisVariable

    This must be used with an index hint on a temp table to be sure that nothing goes wrong...

    That's a way to make a running total... the case statement here would be quite interesting though ;-P.
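
    As a rough sketch only (the exact hints vary, and the table and variable names here are just illustrative):

    CREATE TABLE #Work
    (
        ID int IDENTITY(1,1) PRIMARY KEY CLUSTERED, -- the clustered index supplies the processing order
        Balance money,
        RunningBalance money NULL
    );

    INSERT INTO #Work (Balance)
    SELECT Balance FROM dbo.Accounts ORDER BY TransactionDate;

    DECLARE @RunningTotal money;
    SET @RunningTotal = 0;

    UPDATE W
       SET @RunningTotal = RunningBalance = @RunningTotal + Balance
      FROM #Work W WITH (TABLOCKX, INDEX(0)) -- force the clustered index scan
    OPTION (MAXDOP 1);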

  • Ninja's_RGR'us (11/27/2008)


    I meant this by quirky update :

    Update table set @Var = UpdatedColumn = @Var + WhateverItTakesToEvaluateThisVariable

    This must be used with an index hint on a temp table to be sure that nothing goes wrong...

    That's a way to make a running total... the case statement here would be quite interesting though ;-P.

    Oohh, that one.

    You know that this "trick" is neither documented nor supported, don't you?

    (Clarification: "UPDATE ... SET @var = column = expression" is documented and supported, but the effects of using that same variable in the expression are not - and given the official intent of an UPDATE statement, the results people currently get in 99% of all cases could be considered a bug.)
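
    To illustrate the distinction (sketch only):

    DECLARE @var1 money;

    -- Documented and supported: assign the column and capture the assigned value in the variable.
    UPDATE dbo.Accounts
       SET @var1 = RunningBalance = Balance;

    -- Not documented or supported: carrying state from row to row by referencing the
    -- same variable in the expression, e.g. SET @var1 = RunningBalance = @var1 + Balance.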


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Yup that's the one!

    Thanks for the clarifications!

  • Is no one else going to point out that the results of this process are flawed: the first four values are given as zero, which is just plain wrong! (One could argue that in some cases they should return NULL, but never zero.)

    Throw away your pocket calculators; visit www.calcResult.com
    Very good point from Mike. It should never be zero.

    I've had weirder requests than this... sometimes the user wants the data if and only if a certain number of days has been taken into account.

    But I agree... that looks funny to me too!

  • I'll address a couple of concerns from my point of view.

    1. Based on the article, the first 4 values should be zero. This is why I put an isnull in my code. If you take that out, the first four values will be null.

    2. The identity column. Again, based on the article, the sole purpose of the identity column is to provide an order for the values inserted into the table. By making this column a primary key, I also made it a clustered index, allowing it to force the specified order. Following Jeff's method, I also should have put an index hint on the update query, but for this demonstration it worked well without one. Since the purpose of the identity field was ordering, a gap wouldn't have caused an issue.

    Some of the other things that have been brought up are actually extensions of this article, and I challenge those individuals to pursue those and show us what you find, either as a post here, or write another article expanding on what the author started.

    This is how we learn new tricks and ways to accomplish a variety of tasks.

    I reran my queries, once with the initial data set and a second time with a larger data set. Below are the row counts, statistics io, and statistics time from the runs.

    Row count: 20

    -- Cross Join Query --
    Table 'Accounts'. Scan count 17, logical reads 74, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 2 ms.
    -- Cross Join Query --

    -- Update Query --
    Table 'Accounts'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 44 ms.
    -- Update Join Query --

    -- Select After Update Query --
    Table 'Accounts'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 1 ms.
    -- Select After Update Query --

    Row count: 3280

    -- Cross Join Query --
    Table 'Accounts'. Scan count 3278, logical reads 6663, physical reads 0, read-ahead reads 49, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 94 ms, elapsed time = 246 ms.
    -- Cross Join Query --

    -- Update Query --
    Table 'Accounts'. Scan count 1, logical reads 18, physical reads 0, read-ahead reads 49, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 31 ms, elapsed time = 35 ms.
    -- Update Join Query --

    -- Select After Update Query --
    Table 'Accounts'. Scan count 1, logical reads 18, physical reads 0, read-ahead reads 38, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 201 ms.
    -- Select After Update Query --

  • Hugo Kornelis (11/27/2008)


    What I was referring to is the lack of ROW_NUMBER() in SQL Server 2000. This means you'll either have to take your chances with IDENTITY, at the risk of gaps, as the author of this article did; or you have to use a correlated subquery to calculate the row number on the fly, which can degrade performance dramatically as the number of rows grows. Plus, the queries tend to get long and hard to understand.

    Nope... in SQL Server 2000, just do a SELECT INTO a temp table with the IDENTITY function and the author's code works just fine without any difficulty for length or understanding.
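
    Roughly like this (just a sketch, using Lynn's column names from above):

    SELECT IDENTITY(int, 1, 1) AS SeqID,
           TransactionDate,
           Balance,
           CAST(NULL AS money) AS RunningBalance
    INTO   #Accounts
    FROM   dbo.Accounts
    ORDER  BY TransactionDate;

    SeqID comes out consecutive (1, 2, 3, ...) even if the source table's IDENTITY column has gaps, so the article's approach can then be run against #Accounts.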

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
