• craig.lovegren (4/28/2009)


    I did exactly that, using the small example, you find that the datediff and dateadd are nearly identical, indexes or not. So I grabbed a large table (a few million rows) from our systems, indexed them appropriately and ran a few tests. When comparing my three tests, datediff ranked at 90%, dateadd, although a complex plan, took only 9%, and the final test took the remaining 1%.

    It sounds as if you are quoting percentage batch costs from the execution plan. If so, please be aware that these percentages are based on the optimizer's cost estimates, not on measured performance. To report performance results accurately, you need a reproducible test setup (with code) and, as a minimum, the output from SET STATISTICS IO ON and SET STATISTICS TIME ON. It is also common to start with a cold cache. A posted actual execution plan is often helpful too.
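    A minimal sketch of the kind of test harness I mean, using hypothetical table and column names (dbo.Orders, OrderDate) — adjust to your own schema, and only clear the cache on a test server:

    ```sql
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    DBCC DROPCLEANBUFFERS;  -- cold cache; test servers only

    -- Non-sargable: wrapping the column in DATEDIFF defeats any index on OrderDate
    SELECT COUNT(*)
    FROM dbo.Orders
    WHERE DATEDIFF(DAY, OrderDate, '2009-04-28') = 0;

    DBCC DROPCLEANBUFFERS;

    -- Sargable: a half-open range on the bare column can seek on an OrderDate index
    SELECT COUNT(*)
    FROM dbo.Orders
    WHERE OrderDate >= '2009-04-28'
      AND OrderDate <  DATEADD(DAY, 1, '2009-04-28');

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```

    The logical reads and elapsed times printed to the Messages tab are comparable between runs, which the plan's estimated cost percentages are not.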

    craig.lovegren (4/28/2009)


    What we typically do for tables which are queried on a datetime field religiously is to actually create another column with the date portion only. It does require an extra 4 bytes (smalldatetime) per row, but disk space is cheap. Just remember to update the column after the data load in a batch update statement, don't use triggers, computed columns, or calcs in your inserts, as this will slow your data loads down drastically.

    Well, that all rather depends on your priorities. A computed column has the distinct advantage of being guaranteed to match the source data at all times; with a separately maintained column, someone could update one of your date columns and not the other.
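    For illustration (hypothetical names again), the same date-only column can be a persisted computed column, which SQL Server keeps in sync by definition and which can be indexed because the DATEADD/DATEDIFF expression is deterministic:

    ```sql
    -- Strips the time portion of OrderDate; PERSISTED stores it like a real column
    ALTER TABLE dbo.Orders
        ADD OrderDay AS DATEADD(DAY, DATEDIFF(DAY, 0, OrderDate), 0) PERSISTED;

    CREATE INDEX IX_Orders_OrderDay
        ON dbo.Orders (OrderDay);
    ```

    No batch update after the load, no trigger, and no window in which the two columns disagree.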

    I am confused as to how a computed column slows down your data loads 'drastically'. Moreover, simple scalar calculations in the SELECT part of an INSERT are typically so much cheaper than the INSERT operation itself that they make no odds.
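    To make that concrete: if you do maintain OrderDay as an ordinary column (the approach described above — names are hypothetical), the per-row scalar work can simply ride along in the load itself rather than in a second batch update:

    ```sql
    -- The DATEADD/DATEDIFF call is a trivial scalar computation per row;
    -- the cost of the statement is dominated by the insert, not the SELECT expressions
    INSERT INTO dbo.Orders (OrderID, OrderDate, OrderDay)
    SELECT OrderID,
           OrderDate,
           DATEADD(DAY, DATEDIFF(DAY, 0, OrderDate), 0)
    FROM staging.Orders;
    ```

    One pass over the data instead of two, and no window where OrderDay is stale.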