Solving the "Running Total" & "Ordinal Rank" Problems in SS 2k/2k5

  • Hi Jeff,

    Interesting article to read. Good test code.

    Thanks,

    Sreejith

    MCAD

  • One more 'out-of-the-box' attempt for this topic: could an XML/XQuery guru out there try to tackle the running total problem from that perspective, if appropriate? I have seen some examples of XML-based solutions to other problems (parsing was one, IIRC) that were quite surprising, and I don't know enough about the XML world to know whether RTs can be solved efficiently using an XML-based approach. TIA!!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • toniupstny (2/4/2008)


    Hi Jeff. I was challenged to explore using your method(s) as a less resource-intensive alternative to the subquery method of getting the TOP n candidates from a set.

    It was an interesting exercise and I would appreciate your comments if possible. The time/resources to create the temporary table, create the indexes, do the inserts and updates, and then select the specific candidates (the highest two of each group) did not come out as vastly different as I thought they might from the time/resources of the TOP 2 subquery approach, provided I created a non-clustered index first and then ran the TOP 2 query.

    CREATE NONCLUSTERED INDEX IX_JBMTest_AccountID_Amount
        ON dbo.JBMTest (AccountID, Amount desc)
    go
    select j1.accountid, j1.amount
      from jbmtest j1 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
     where j1.amount in
           (select top 2 j2.amount
              from jbmtest j2 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
             where j1.accountid = j2.accountid
             order by j2.accountid, j2.amount desc)
     order by j1.accountid, j1.amount desc

    drop index jbmtest.ix_jbmtest_accountid_amount

    In my unscientific approach, running the TOP 2 query/subquery without the nonclustered index was about 1200 units of cost. With the non-clustered index created as above, the total cost (index creation and select) was reduced to about 15 units.

    This turned out to be less than the time/resources needed for creating the temp table, indexes and updates even prior to doing the select for results. It seemed the time to do the Updates alone was around 30 units of cost.

    My thought with using a non-clustered index was that even if there were already a clustered index out on the current table, you could add a non-clustered one, run the TOP n select then DROP the non-clustered index. In that way you wouldn't have much effect on any other running queries.

    It looks like the key here is an index tailored to how you need to extract the top N rows per group, as the sort was the major component in the TOP n query. The Update statements and temporary table would be redundant. Am I missing something?

    Running totals and counts would be a much different situation.

    Toni

    Like anything else, it depends, Toni... and great exercise... thank you for the post.

    The correlated subquery method runs in about 16 seconds on my box including the index build. It also does a pot wad of disk IO to the tune of over 3 million logical reads and almost a million scans.

    [font="Courier New"]SQL Server parse and compile time:

    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server parse and compile time:

    CPU time = 156 ms, elapsed time = 163 ms.

    Table 'JBMTest'. Scan count 1, logical reads 7408, physical reads 0, read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 5235 ms, elapsed time = 5350 ms.

    SQL Server Execution Times:

    CPU time = 5235 ms, elapsed time = 5351 ms.

    SQL Server parse and compile time:

    CPU time = 234 ms, elapsed time = 260 ms.

    (100022 row(s) affected)

    Table 'JBMTest'. Scan count 999502, logical reads 3007608, physical reads 0, read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 9234 ms, elapsed time = 10736 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 6 ms.[/font]

    The grouped count method of doing the same thing copied to a temp table takes about 14 seconds and isn't nearly so tough on the IO...

    [font="Courier New"]SQL Server parse and compile time:

    CPU time = 0 ms, elapsed time = 1 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 0 ms.

    Table '#MyHead_____________________________________________________________________________________________________________000000000166'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.

    Table 'JBMTest'. Scan count 1, logical reads 11161, physical reads 0, read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 3234 ms, elapsed time = 3238 ms.

    SQL Server parse and compile time:

    CPU time = 1 ms, elapsed time = 1 ms.

    Table '#MyHead_____________________________________________________________________________________________________________000000000166'. Scan count 1, logical reads 4083, physical reads 0, read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 2390 ms, elapsed time = 2398 ms.

    SQL Server Execution Times:

    CPU time = 2390 ms, elapsed time = 2399 ms.

    SQL Server parse and compile time:

    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 0 ms.

    Table '#MyHead_____________________________________________________________________________________________________________000000000166'. Scan count 1, logical reads 4084, physical reads 0, read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 5047 ms, elapsed time = 5168 ms.

    Table '#MyHead_____________________________________________________________________________________________________________000000000166'. Scan count 1, logical reads 4084, physical reads 0, read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 360 ms, elapsed time = 2326 ms.

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 8 ms.

    [/font]

    The interesting part is that the correlated sub-query method comes up with more than 100,000 rows (100,022 during my run) because of "ties" on the amounts. That's where the "it depends" part comes into play... if that's not what you wanted, the correlated subquery can't be made to do it. If it is what you wanted, the grouped running count can still be made to do it if you take the amount into consideration.
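    A hedged sketch of "taking the amount into consideration" (the @PrevAmount variable below is introduced here for illustration and is not part of the article's code): rows that tie on Amount within an account keep the same GrpCnt, so WHERE GrpCnt <= 2 then returns the "with ties" style result against the same #MyHead temp table and index shown below.

    --===== Sketch only: tie-aware grouped running count against dbo.#MyHead
    DECLARE @PrevGrpCnt INT,
            @PrevAcctID INT,
            @PrevAmount MONEY
     SELECT @PrevGrpCnt = 0,
            @PrevAcctID = 0,
            @PrevAmount = NULL

     UPDATE dbo.#MyHead
        SET @PrevGrpCnt = GrpCnt = CASE
                                      WHEN AccountID <> @PrevAcctID THEN 1            -- new account restarts the count
                                      WHEN Amount = @PrevAmount     THEN @PrevGrpCnt  -- tie on Amount keeps the same rank
                                      ELSE @PrevGrpCnt + 1
                                   END,
            @PrevAmount = Amount,
            @PrevAcctID = AccountID
       FROM dbo.#MyHead WITH (INDEX(IX_JBMTest_AccountID_Amount), TABLOCKX)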

    Here's the code I used for the temp table solution...

    SET STATISTICS IO ON
    SET STATISTICS TIME ON

    SELECT AccountID, Amount, Date, CAST(0 AS INT) AS GrpCnt
      INTO #MyHead
      FROM dbo.JBMTest

    CREATE CLUSTERED INDEX IX_JBMTest_AccountID_Amount --clustered to resolve "Merry-go-Round"
        ON dbo.#MyHead (AccountID, Amount DESC)
    GO
    --===== Declare the variables for the "Code Basis"
    DECLARE @PrevRunBal MONEY --Overall running total
        SET @PrevRunBal = 0
    DECLARE @PrevGrpBal MONEY --Running total resets when account changes
        SET @PrevGrpBal = 0
    DECLARE @PrevRunCnt INT --Overall running count (ordinal rank)
        SET @PrevRunCnt = 0
    DECLARE @PrevGrpCnt INT --Running count resets when account changes
        SET @PrevGrpCnt = 0
    DECLARE @PrevAcctID INT --The "anchor" and "account change detector"
        SET @PrevAcctID = 0

    --===== Solve the grouped running count (rank) in a temp table
    UPDATE dbo.#MyHead
       SET @PrevGrpCnt = GrpCnt = CASE
                                     WHEN AccountID = @PrevAcctID
                                     THEN @PrevGrpCnt + 1
                                     ELSE 1 -- Restarts count at "1"
                                  END,
    --===== "Anchor" and provides for "account change detection"
           @PrevAcctID = AccountID
      FROM dbo.#MyHead WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)

    --===== Display the results in the correct order
    SELECT *
      FROM dbo.#MyHead
     WHERE GrpCnt <= 2
     ORDER BY AccountID, Date

    DROP TABLE #MyHead

    Overall, I think the temp table/update approach proves to be less resource intensive than the correlated subquery. Of course, if you have a table with the columns already allocated, the update wins hands down for resource usage. Haven't tried the "Rank" method in 2k5, yet, for this type of thing.
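
    For the curious, a minimal sketch of that 2k5 ranking approach (untested in this thread; RANK() or DENSE_RANK() in place of ROW_NUMBER() would give the "with ties" flavors):

    --===== Sketch only: SQL Server 2005 ranking-function version of "top 2 per account"
    SELECT AccountID, Amount, Date
      FROM (SELECT AccountID, Amount, Date,
                   ROW_NUMBER() OVER (PARTITION BY AccountID ORDER BY Amount DESC) AS GrpCnt
              FROM dbo.JBMTest) d
     WHERE GrpCnt <= 2
     ORDER BY AccountID, Date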

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • sreejithsql@gmail.com (2/4/2008)


    Hi Jeff,

    Interesting article to read. Good test code.

    Thanks,

    Sreejith

    MCAD

    Thanks for the compliment and the feedback, Sreejith.

    --Jeff Moden



  • TheSQLGuru (2/4/2008)


    One more 'out-of-the-box' attempt for this topic: could an XML/XQuery guru out there try to tackle the running total problem from that perspective, if appropriate? I have seen some examples of XML-based solutions to other problems (parsing was one, IIRC) that were quite surprising, and I don't know enough about the XML world to know whether RTs can be solved efficiently using an XML-based approach. TIA!!

    Interesting idea...

    --Jeff Moden



  • Here's 'GrpBal' using XML

    WITH CTE AS (
        SELECT a.AccountID,
               (SELECT b.Date AS "@Date",
                       b.Amount AS "@Amount"
                  FROM JBMTest b
                 WHERE b.AccountID = a.AccountID
                 ORDER BY b.Date
                   FOR XML PATH('Transaction'), ROOT('Transactions'), TYPE) AS X
          FROM (SELECT DISTINCT AccountID FROM JBMTest) a
    )
    SELECT AccountID,
           r.value('@Date','datetime') AS Date,
           r.value('@Amount','float') AS Amount,
           r.value('@Amount','float') +
           r.value('sum(for $a in . return $a/../*[. << $a]/@Amount)','float') AS GrpBal
      FROM CTE
     CROSS APPLY X.nodes('/Transactions/Transaction') AS x(r)
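
    The heart of it is the predicate [. << $a]: "<<" is the XQuery node-order comparison, so for each Transaction node the sum() runs over every sibling that precedes it in the document. A minimal, self-contained sketch of that operator (hypothetical data, not from this thread):

    DECLARE @x XML
    SET @x = N'<r><t v="10"/><t v="20"/><t v="30"/></r>'
    -- For each <t> node, sum the @v of all preceding siblings
    SELECT n.value('@v', 'float') AS v,
           n.value('sum(for $a in . return $a/../*[. << $a]/@v)', 'float') AS SumBefore
      FROM @x.nodes('/r/t') AS x(n)
    -- Returns (10, 0), (20, 10), (30, 30): every node re-reads all of its
    -- predecessors, which matters for performance later in this thread.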

    ____________________________________________________

    Deja View - The strange feeling that somewhere, sometime you've optimised this query before

    How to get the best help on a forum

    http://www.sqlservercentral.com/articles/Best+Practices/61537
  • Hi Jeff. Thank you for the response. I ran the query you had posted in your prior response.

    SET STATISTICS IO ON
    SET STATISTICS TIME ON

    SELECT AccountID, Amount, Date, CAST(0 AS INT) AS GrpCnt
      INTO #MyHead
      FROM dbo.JBMTest

    CREATE CLUSTERED INDEX IX_JBMTest_AccountID_Amount --clustered to resolve "Merry-go-Round"
        ON dbo.#MyHead (AccountID, Amount DESC)
    GO
    --===== Declare the variables for the "Code Basis"
    DECLARE @PrevRunBal MONEY --Overall running total
        SET @PrevRunBal = 0
    DECLARE @PrevGrpBal MONEY --Running total resets when account changes
        SET @PrevGrpBal = 0
    DECLARE @PrevRunCnt INT --Overall running count (ordinal rank)
        SET @PrevRunCnt = 0
    DECLARE @PrevGrpCnt INT --Running count resets when account changes
        SET @PrevGrpCnt = 0
    DECLARE @PrevAcctID INT --The "anchor" and "account change detector"
        SET @PrevAcctID = 0

    --===== Solve the grouped running count (rank) in a temp table
    UPDATE dbo.#MyHead
       SET --===== "Anchor" and provides for "account change detection"
           @PrevGrpCnt = GrpCnt = CASE
                                     WHEN AccountID = @PrevAcctID
                                     THEN @PrevGrpCnt + 1
                                     ELSE 1 -- Restarts count at "1"
                                  END,
           @PrevAcctID = AccountID
      FROM dbo.#MyHead WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)

    --===== Display the results in the correct order
    SELECT *
      FROM dbo.#MyHead
     WHERE GrpCnt <= 2
     ORDER BY AccountID, Date

    DROP TABLE #MyHead

    and had the following IO and Time statistics:

    /*-----------------------------
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT AccountID, Amount, Date, CAST(0 AS INT) AS GrpCnt
    -----------------------------*/
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.

    (100000 row(s) affected)

    Table '#MyHead_____________________________________________________________________________________________________________000000000059'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.
    Table 'JBMTest'. Scan count 1, logical reads 705, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 110 ms, elapsed time = 121 ms.
    SQL Server Execution Times:
       CPU time = 110 ms, elapsed time = 121 ms.
    SQL Server Execution Times:
       CPU time = 110 ms, elapsed time = 127 ms.
    SQL Server parse and compile time:
       CPU time = 109 ms, elapsed time = 171 ms.
    Table '#MyHead_____________________________________________________________________________________________________________000000000059'. Scan count 2, logical reads 820, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 173 ms, elapsed time = 115 ms.
    SQL Server Execution Times:
       CPU time = 173 ms, elapsed time = 118 ms.

    (4 row(s) affected)

    SQL Server Execution Times:
       CPU time = 173 ms, elapsed time = 120 ms.
    SQL Server Execution Times:
       CPU time = 173 ms, elapsed time = 120 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.

    (100000 row(s) affected)

    Table '#MyHead_____________________________________________________________________________________________________________000000000059'. Scan count 1, logical reads 411, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 359 ms, elapsed time = 425 ms.
    SQL Server Execution Times:
       CPU time = 359 ms, elapsed time = 431 ms.
    SQL Server Execution Times:
       CPU time = 359 ms, elapsed time = 431 ms.

    (72978 row(s) affected)

    Table '#MyHead_____________________________________________________________________________________________________________000000000059'. Scan count 2, logical reads 412, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 189 ms, elapsed time = 652 ms.
    SQL Server Execution Times:
       CPU time = 189 ms, elapsed time = 653 ms.
    SQL Server Execution Times:
       CPU time = 189 ms, elapsed time = 653 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.

    +++++++++++++++++++++++++++++

     CPU    Elapsed    Logical Reads
       -          -                1
       -          -              705
     110        121                -
     110        121                -
     110        127                -
     109        171                -
       -          -              820
     173        115                -
     173        118                -
     173        120                -
     173        120                -
       -          -              411
     359        425                -
     359        431                -
     359        431                -
       -          -              412
     189        652                -
     189        653                -
     189        653                -
    ----       ----             ----
    2775       4258             2349    Totals for Update Method

    Then I ran the original subquery method:

    set statistics io on
    set statistics time on

    CREATE NONCLUSTERED INDEX IX_JBMTest_AccountID_Amount
        ON dbo.JBMTest (AccountID, Amount desc)
    go
    select j1.accountid, j1.amount
      from jbmtest j1 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
     where j1.amount in
           (select top 2 j2.amount
              from jbmtest j2 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
             where j1.accountid = j2.accountid
             order by j2.accountid, j2.amount desc)
     order by j1.accountid, j1.amount desc

    drop index jbmtest.ix_jbmtest_accountid_amount

    and had the following IO and Time statistics:

    /*-----------------------------
    set statistics io on
    set statistics time on
    CREATE NONCLUSTERED INDEX IX_JBMTest_AccountID_Amount
    -----------------------------*/
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.
    Table 'JBMTest'. Scan count 2, logical reads 1410, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 156 ms, elapsed time = 131 ms.
    SQL Server Execution Times:
       CPU time = 156 ms, elapsed time = 132 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 3 ms.

    (72983 row(s) affected)

    Table 'JBMTest'. Scan count 99993, logical reads 300799, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 640 ms, elapsed time = 2459 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.

    ++++++++++++++++++++++++++++++++++++++

     CPU    Elapsed    Logical Reads
       0          1                -
       -          -             1410
     156        131                -
     156        132                -
       0          3                -
       -          -           300799
     640       2459                -
    ----       ----             ----
     952       2726           302209    Total for Correlated subquery

    I don't really understand why the resource usage statistics are so different from the ones you received. Physical reads were 0 in both runs. I tried flushing the buffers with both CHECKPOINT and DBCC DROPCLEANBUFFERS, as well as stopping and restarting the server instance through Service Manager, but there were still no physical I/Os shown after running the batches.

    Is there another method of clearing the buffers I missed? Perhaps if the reads in both cases were done from scratch the results would be quite different? Maybe the relative CPU is the answer, since my tests are running on my laptop and you likely have a large server?
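
    For reference, the standard cold-cache sequence (documented commands, intended for test servers only) is the one described above, with plan-cache clearing added:

    CHECKPOINT              -- write dirty pages to disk so clean buffers can be dropped
    DBCC DROPCLEANBUFFERS   -- empty the buffer pool; the next run reads from disk
    DBCC FREEPROCCACHE      -- discard cached plans; the next run recompiles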

    In any case... the answer remains it depends.

    As for being able to resolve for ties, distinct amounts, etc., it can be done via a correlated subquery with some modifications.

    The original query returned just the TOP 2 (without ties).

    To get the TOP 2 distinct values:

    CREATE NONCLUSTERED INDEX IX_JBMTest_AccountID_Amount
        ON dbo.JBMTest (AccountID, Amount desc)
    go
    select distinct j1.accountid, j1.amount
      from jbmtest j1 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
     where j1.amount in
           (select j3.amount
              from (select distinct top 2 j2.amount, j2.accountid
                      from jbmtest j2 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
                     where j1.accountid = j2.accountid
                     order by j2.accountid, j2.amount desc) as j3)
     order by j1.accountid, j1.amount desc

    drop index jbmtest.ix_jbmtest_accountid_amount

    For the TOP 2 With Ties equivalent...

    CREATE NONCLUSTERED INDEX IX_JBMTest_AccountID_Amount
        ON dbo.JBMTest (AccountID, Amount desc)
    go
    select j1.accountid, j1.amount
      from jbmtest j1 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
     where j1.amount in
           (select j3.amount
              from (select distinct top 2 j2.amount, j2.accountid
                      from jbmtest j2 WITH (INDEX(IX_JBMTest_AccountID_Amount),TABLOCKX)
                     where j1.accountid = j2.accountid
                     order by j2.accountid, j2.amount desc) as j3)
     order by j1.accountid, j1.amount desc

    drop index jbmtest.ix_jbmtest_accountid_amount

    These variations did greatly increase the relative cost (double or more) of the query/subquery, yet they show the correlated subquery approach can resolve ties in all variations if reworked a bit.

    In any case, this was an interesting exercise and I learned quite a bit. Thank you for the most interesting article!

    Toni

  • You bet, Toni. Thanks for all the feedback and code you took the time to write and test. Everyone learns something new when good folks, like yourself, do just that. I really appreciate it.

    --Jeff Moden



  • Mark (2/5/2008)


    Here's 'GrpBal' using XML

    WITH CTE AS (
        SELECT a.AccountID,
               (SELECT b.Date AS "@Date",
                       b.Amount AS "@Amount"
                  FROM JBMTest b
                 WHERE b.AccountID = a.AccountID
                 ORDER BY b.Date
                   FOR XML PATH('Transaction'), ROOT('Transactions'), TYPE) AS X
          FROM (SELECT DISTINCT AccountID FROM JBMTest) a
    )
    SELECT AccountID,
           r.value('@Date','datetime') AS Date,
           r.value('@Amount','float') AS Amount,
           r.value('@Amount','float') +
           r.value('sum(for $a in . return $a/../*[. << $a]/@Amount)','float') AS GrpBal
      FROM CTE
     CROSS APPLY X.nodes('/Transactions/Transaction') AS x(r)

    Now this is a cool solution !!! :w00t::smooooth:

    It performs very well, but I'll have to do some more testing before I can explain what it does and how it works :ermm:

    That may take some time, because this week they want to make me a BI guy :blink:

    Johan

    Learn to play, play to learn!

    Don't drive faster than your guardian angel can fly ...
    but keeping both feet on the ground won't get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help


  • It IS a cool solution... it wasn't so good in the performance department on my machine, though... it took 29 minutes to do the million row example... a correlated subquery might do just as well in this particular case. There are only 200 rows per account. CROSS APPLY gave it all the characteristics of a triangular join.

    --Jeff Moden



  • Jeff Moden (2/5/2008)


    It IS a cool solution... it wasn't so good in the performance department on my machine, though... it took 29 minutes to do the million row example... a correlated subquery might do just as well in this particular case. There are only 200 rows per account. CROSS APPLY gave it all the characteristics of a triangular join.

    Remember - CROSS APPLY = correlated subquery... and if my XQuery is good enough to decipher that, it IS a triangular join being created...
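
    A sketch of the same computation in plain T-SQL (illustrative only, not code from this thread) makes the triangle visible; every row joins to all earlier rows in its account, so the work grows roughly as N*(N+1)/2 per account:

    --===== Sketch only: running balance per account via a triangular self-join
    SELECT a.AccountID, a.Date, a.Amount,
           a.Amount + COALESCE(SUM(b.Amount), 0) AS GrpBal
      FROM dbo.JBMTest a
      LEFT JOIN dbo.JBMTest b
        ON b.AccountID = a.AccountID
       AND b.Date < a.Date          -- every *earlier* row: the triangle
     GROUP BY a.AccountID, a.Date, a.Amount
     ORDER BY a.AccountID, a.Date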

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Andy is right. We have set up MSSQL 2000 (4 CPUs) with parallel query execution and your example code does not work. I tried it 3 times, but always with messy results.

    Example result:

    RowNum  AccountID    Amount  Date                         RunBal      GrpBal  RunCnt  GrpCnt
    ------  ---------  --------  -----------------------  ----------  ----------  ------  ------
    451167          1  -38.3500  2000-06-23 09:25:07.967  -1430.6200    -72.5800    3920      15
    905251          1  -92.9700  2001-05-08 19:28:55.757  -1353.9100      4.1300    3909       4
    738143          1   -3.3700  2002-03-05 06:33:47.590  -1375.4400    -17.4000    3916      11
      5200          1   61.7700  2002-10-12 05:48:14.943  -1260.9400     97.1000    3908       3
     62482          1   42.2400  2003-02-16 08:19:53.653  -1392.2700    -34.2300    3919      14
    146296          1   67.6300  2003-05-07 08:18:23.690  -1286.2800     71.7600    3910       5

    With regards,

    Radovan Jablonovsky

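    If a parallel plan really is what scrambles the results, the commonly suggested guard (a sketch only, not verified against Radovan's setup) is to force the update serial so the variable assignments cannot be interleaved across workers:

    --===== Sketch only: quirky update pinned to a serial plan.
         -- INDEX(0) forces a clustered index scan; MAXDOP 1 disallows parallelism.
    DECLARE @PrevRunBal MONEY
        SET @PrevRunBal = 0

     UPDATE dbo.JBMTest
        SET @PrevRunBal = RunBal = @PrevRunBal + Amount
       FROM dbo.JBMTest WITH (INDEX(0), TABLOCKX)
     OPTION (MAXDOP 1)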

  • How in the world did you get parallel execution out of it? I've tried it on both a 4-CPU and an 8-CPU SQL 2k SP4 installation and can't get it to spawn parallelism.

    --Jeff Moden



  • Hello,

    This thread is reaching unprecedented proportions.

    How could we get the MS SQL engine development guys to participate in the discussion, if they are not already doing so or watching from the sidelines?

    Very valuable and arguable content in here. :P

    BI Guy
