• I came up with a different plan from yours for "best solution":

    set nocount on
    go

    -- test table, with a calculated column for the total
    create table CaseDetails
        (CaseId int identity(1,1) primary key,
        Qty numeric(14,4),
        WeightedQty numeric(14,4),
        ShipmentId int,
        Total as Qty + WeightedQty)
    go

    raiserror('First Insert', 10, 1) with nowait
    go

    -- 15,000 rows for shipment ID 1
    insert into dbo.CaseDetails (Qty, WeightedQty, ShipmentId)
    select 200, 300, 1
    from common.dbo.BigNumbers
    where Number between 1 and 15000
    go

    raiserror('Second Insert', 10, 1) with nowait
    go

    -- roughly 100-million rows across shipment IDs 2 through 10,000
    insert into dbo.CaseDetails (Qty, WeightedQty, ShipmentId)
    select 200, 300, Numbers.Number
    from common.dbo.Numbers
    cross join common.dbo.Numbers n2
    where Numbers.Number between 2 and 10000
    go

    raiserror('Index', 10, 1) with nowait
    go

    -- covering index: seek on ShipmentId, every queried column included
    create index IDX_CaseDetails_TotalQty on dbo.CaseDetails (ShipmentId)
        include (Qty, WeightedQty, Total)
    go

    set statistics io on
    set statistics time on

    print 'First Method'
    -- do the addition inline, ignoring the calculated column
    select Qty, WeightedQty, Qty + WeightedQty
    from dbo.CaseDetails
    where ShipmentId = 1
    /*
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.
    Table 'CaseDetails'. Scan count 1, logical reads 82,
    physical reads 0, read-ahead reads 0, lob logical reads 0,
    lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 16 ms, elapsed time = 231 ms.
    */

    print 'Second Method'
    -- assign the total to a variable first, then select it
    declare @Total numeric(14,4)  -- match the type of Qty + WeightedQty

    select @Total = Qty + WeightedQty
    from dbo.CaseDetails
    where ShipmentId = 1

    select Qty, WeightedQty, @Total
    from dbo.CaseDetails
    where ShipmentId = 1
    /*
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.
    Table 'CaseDetails'. Scan count 1, logical reads 82,
    physical reads 0, read-ahead reads 0, lob logical reads 0,
    lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 8 ms.
    Table 'CaseDetails'. Scan count 1, logical reads 82,
    physical reads 0, read-ahead reads 0, lob logical reads 0,
    lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 31 ms, elapsed time = 230 ms.
    */

    print 'Third Method'
    -- read the calculated column straight from the covering index
    select Qty, WeightedQty, Total
    from dbo.CaseDetails
    where ShipmentId = 1
    /*
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.
    Table 'CaseDetails'. Scan count 1, logical reads 82,
    physical reads 0, read-ahead reads 0, lob logical reads 0,
    lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 226 ms.
    */

    As you'll see, I added a calculated column to the test table. Then I inserted 15,000 rows for shipment ID 1, and roughly 100-million rows spread across about 10,000 other shipment IDs. (I prefer tests in bigger tables.) Then I indexed the table.

    On my computer, populating the table with over 100-million rows took quite a while, but with a Numbers table (a sketch of one is below, if you don't already have one) it wasn't too bad: under an hour. That might have been overkill, I admit. If you decide to test this, I recommend about 10-million rows; that will be easier to deal with and is perfectly valid for testing.
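    Here's a minimal sketch of one way to build a 10,000-row Numbers table. The name common.dbo.Numbers and the row count are just assumptions to line up with the script above, not the exact table I used:

    -- Hypothetical sketch: build a 10,000-row Numbers table.
    -- Name and size are assumptions matching the script above.
    create table common.dbo.Numbers (Number int primary key)
    go

    insert into common.dbo.Numbers (Number)
    select top (10000)
        row_number() over (order by (select null))
    from sys.all_columns c1
    cross join sys.all_columns c2
    go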

    I didn't test the inline function, since I already know it will perform horribly; for reference, I'm picturing something shaped like the scalar function sketched below.
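    This is my own guess at the pattern being argued against, not the article's actual code. A scalar function like this gets called once per row, which is exactly what kills it on big scans:

    -- Hypothetical sketch of the scalar-function pattern I'm arguing
    -- against; not the article's actual code.
    create function dbo.fn_TotalQty
        (@Qty numeric(14,4), @WeightedQty numeric(14,4))
    returns numeric(14,4)
    as
    begin
        return @Qty + @WeightedQty  -- executed once per row
    end
    go

    -- Called per row, this prevents efficient set-based evaluation:
    select Qty, WeightedQty, dbo.fn_TotalQty(Qty, WeightedQty)
    from dbo.CaseDetails
    where ShipmentId = 1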

    With the calculated column and the right index, the CPU time on these queries goes WAY down.

    The first test was a straight-up calculation, without the calculated column. It used the index, and took 16 milliseconds of CPU time and 1 table scan.

    The second test used the "assign the number to a variable" method, as outlined in the article. That used the index, took 31 milliseconds of CPU time, and took 2 table scans (one to assign the variable, one to select the data).

    The final test used the calculated column. That took under 1 millisecond of CPU time (too fast to measure) and 1 table scan.

    (All of these "table scans" are just what the IO statistics report. They're actually index seeks in the execution plan, which you can confirm with the snippet below.)
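    If you want to check that yourself, one way is SHOWPLAN_TEXT; the graphical plan in Management Studio shows the same thing.

    -- One way to confirm the seek: show the plan as text.
    set showplan_text on
    go
    select Qty, WeightedQty, Total
    from dbo.CaseDetails
    where ShipmentId = 1
    go
    set showplan_text off
    go
    -- The output shows an Index Seek on IDX_CaseDetails_TotalQty,
    -- not a table scan.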

    Rather than using an inline function for this kind of calculation, I highly recommend non-persisted, indexed, calculated columns. I have used them for many things, and they are very fast. They don't slow down insert/update/delete by much (I suppose a complex enough calculation on a big enough insert or update might slow things measurably, but I haven't seen it yet). They do have some cost for those operations, and they do add to the size of the index, but the cost is small compared to the benefit. You can even retrofit one onto an existing table, as in the sketch below.
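    A minimal sketch of the retrofit, assuming the table already exists; the column and index names here are placeholders, not from my tests above. Because the column is indexed, its values get stored in the index even though the column itself is non-persisted, which is why the select is so cheap.

    -- Hypothetical sketch: add a non-persisted calculated column and a
    -- covering index to an existing table. Names are placeholders.
    alter table dbo.CaseDetails
        add Total2 as Qty + WeightedQty  -- computed on read, not persisted
    go

    create index IDX_CaseDetails_Total2
        on dbo.CaseDetails (ShipmentId)
        include (Qty, WeightedQty, Total2)
    go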

    This way, you don't have to count on the data being the same in all rows, you cut the IO in half compared to the variable method, and the select finishes so fast that "statistics time" can't really measure it. And that's in a table with 100-million rows.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon