• seankyleprice (11/16/2012)


    I personally think the answer should be 'none of the above'. The accuracy of sysdatetime() depends on the computer hardware and the version of Windows on which the instance of SQL Server is running. Queries 2 and 4 may give an accurate value for the number of nanoseconds since when your server believes midnight to have been, but not 'reliably' since when midnight actually was, unless you are confident that the date and time of your server are accurate to 100 nanoseconds.

    Excellent point.

    We can of course take the 'midnight' referred to, in the context of how long the computer has been running since midnight, to be the time when the server thought midnight was, but that doesn't help: reliability is still affected by the clock drift between then and now.
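
    For anyone following along, that distinction can be sketched in Python (not the T-SQL of the actual question; the shape of the query is only an analogy). A query of this kind counts nominal 100 ns ticks since the clock's *own* midnight, whatever the clock's absolute error:

```python
from datetime import datetime

# Illustration only: count nominal 100 ns "ticks" since the machine's own
# notion of midnight -- not since true midnight. Any absolute clock error
# is invisible to this calculation.
now = datetime.now()  # the server's current opinion of the time
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
elapsed = now - midnight  # elapsed according to the same (possibly wrong) clock

# Python's datetime resolves to microseconds; one microsecond is ten
# 100 ns ticks, so the tick count is:
ticks_100ns = (elapsed.seconds * 1_000_000 + elapsed.microseconds) * 10

# At most one day's worth of 100 ns ticks:
assert 0 <= ticks_100ns < 864_000_000_000_000
```

    Both the start point (midnight) and the end point (now) come from the same clock, so the result is only as trustworthy as that clock.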

    Typical systems today, even those with very good hardware and excellent clock-management software that (when correctly configured, which may not be all that common) uses data from a reliable time server to measure the hardware's drift, would not be able to run reliably with a drift rate between time-server corrections of less than 37 microseconds per annum (and that is a few decimal orders of magnitude better than typical, I think). So if "reliable to 100 nanoseconds" really means just that, you are right and the correct answer is "none of the above". Even if "reliable to the nearest 100 nanoseconds" means "an accurate count of the clock's nominally 100 ns ticks", a correction from the time server might have occurred during the interval and disturbed the tick count, so the result would be unreliable anyway.
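
    To put rough numbers on that (in Python rather than T-SQL, taking the 37 microseconds-per-annum figure from the paragraph above as the assumed drift rate):

```python
# Back-of-envelope check of the drift argument. Assume the quoted,
# unusually good drift rate of 37 microseconds per annum and ask how
# long it takes the clock to drift by one nominal 100 ns tick.
DRIFT_PER_YEAR_S = 37e-6           # assumed: 37 us of drift per year
TICK_S = 100e-9                    # one nominal 100 ns tick
SECONDS_PER_YEAR = 365.25 * 24 * 3600

drift_per_second = DRIFT_PER_YEAR_S / SECONDS_PER_YEAR
seconds_to_one_tick = TICK_S / drift_per_second
hours_to_one_tick = seconds_to_one_tick / 3600
```

    That works out to roughly 23.7 hours: even at that exceptional drift rate the clock has drifted by more than one tick over the course of a day, so "reliable to 100 nanoseconds since midnight" cannot hold in the strict sense.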

    When I wrote the question I meant "reliable within the capability of the system to determine the time". I should have included those words, or something like them, in the question.

    Tom