• In MS SQL 2005, the likelihood of hitting millisecond values of 998 or 999 is nil. Zero. It cannot happen, because datetime is stored in 1/300-second ticks, so milliseconds always land on .000, .003, or .007 boundaries. So, yes, you could maybe guarantee proper behavior by specifying .997, but you intuitively don't want to do that -- it just smacks of a thrown-together design. So you want to stick to the 59.000 values because they look cleaner, even if it leaves a one-second gap.
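    To make the rounding behavior concrete, here is a minimal Python sketch simulating it. This assumes only the documented 1/300-second tick granularity of the SQL 2005 datetime type; the function name is my own, not anything in SQL Server.

    ```python
    def round_sql_datetime_ms(ms):
        """Simulate SQL 2005 datetime rounding of a millisecond value.

        datetime stores the time-of-day as 1/300-second ticks, so any
        millisecond value is rounded to the nearest representable tick;
        the results land only on .000/.003/.007 boundaries.
        """
        ticks = int(ms * 3 / 10 + 0.5)       # nearest 1/300-second tick
        return int(ticks * 10 / 3 + 0.5)     # back to milliseconds

    # .997 is representable; .998 collapses onto it; .999 rolls over
    # to the next whole second -- so stored values of .998 or .999
    # simply cannot occur.
    assert round_sql_datetime_ms(997) == 997
    assert round_sql_datetime_ms(998) == 997
    assert round_sql_datetime_ms(999) == 1000
    ```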

    I would postulate that a one-second gap is enormous in a transaction processing environment. If an application is handling even a modest volume like 5,000 transactions an hour, you will most likely have some transactions time-stamped in that one-second window. Why deliberately build in a logic error? What's the objection to my suggested approach of specifying a "not greater than" limit for the end time? It lets you enter a very clean (to the minute) value and is absolutely accurate, with no fudging about lightning-strike values or about compatibility with future versions of MS-SQL that handle datetime values to a much finer level of granularity.
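    The difference between the two predicates can be sketched in a few lines of Python. This is just an illustration of the interval logic, not SQL itself; the helper names and the sample timestamp are mine.

    ```python
    from datetime import datetime

    def in_between(ts, start, end):
        """Closed interval, like SQL's BETWEEN: start <= ts <= end."""
        return start <= ts <= end

    def in_half_open(ts, start, end_exclusive):
        """The "not greater than" approach: start <= ts < end_exclusive."""
        return start <= ts < end_exclusive

    # A transaction stamped inside the one-second gap at the end of June 30.
    ts = datetime(2005, 6, 30, 23, 59, 59, 500000)

    start     = datetime(2005, 6, 1)
    clean_end = datetime(2005, 6, 30, 23, 59, 59)   # the "clean" 59.000 value
    next_day  = datetime(2005, 7, 1)

    # BETWEEN with the clean end value silently drops the transaction...
    assert not in_between(ts, start, clean_end)
    # ...while the half-open test catches it, with no .997 fudging and no
    # dependence on the datetime type's granularity.
    assert in_half_open(ts, start, next_day)
    ```

    Because the upper bound is exclusive, the half-open form stays correct no matter how finely a future datetime type resolves fractional seconds.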

    This has been a stimulating discussion. I thank you for your clear and thorough exposition of the problem and your spirited participation in our dialog.