• skjoldtc (2/17/2009)


    Points schmointz... who cares?

    The takeaway is that SQL Server rounds datetime values, and that you shouldn't use BETWEEN with a date range. But this seems such a strange way to do things. What is the rationale?

    Hi skjoldtc,

    The rationale for rounding? Or for not using BETWEEN?

    For rounding, the rationale is that the internal representation of datetime values uses two four-byte integers: one for the number of whole days since January 1st, 1900, and the other for the number of 1/300-second units (represented in textual format as .xx0, .xx3, and .xx7) after midnight. This internal representation makes it impossible to store a value such as 23:59:59.999, so an attempt to set the time component of a datetime to this value can either cause an error or result in implicit rounding during the text-to-datetime conversion. I like the choice for rounding, since I'd hate to explain to my users that they can only enter datetime values with specific last digits for the milliseconds. (Well, not that I expect my end users to manually key in a datetime with millisecond precision, but you get the point :)) Once the choice for rounding has been made, the next decision is whether to always round up, always round down, or round to the nearest representable value. Always rounding down looks nice when you consider 23:59:59.999 specifically, but doesn't make much sense for any other value.
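
    As a quick illustration, here is some T-SQL you can run to see both the internal format and the rounding (the dates are arbitrary examples picked for this sketch):

        -- The two four-byte integers become visible when you cast to binary(8):
        -- first 4 bytes = days since 1900-01-01, last 4 bytes = 1/300-second ticks since midnight.
        SELECT CAST(CAST('19000102 00:00:00.003' AS datetime) AS binary(8));
        -- Returns 0x0000000100000001: day 1, tick 1.

        -- Rounding goes to the nearest representable value (.xx0 / .xx3 / .xx7):
        SELECT CAST('2009-02-17 23:59:59.998' AS datetime);  -- rounds down to 23:59:59.997
        SELECT CAST('2009-02-17 23:59:59.999' AS datetime);  -- rounds up to 2009-02-18 00:00:00.000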

    The rationale for not using BETWEEN follows logically from the above. But that's not the only reason. Even though SQL Server has some limitations in its representation, time is in essence a continuous measure. If you want to divide such a measure into intervals, the only correct way to do this is by using half-open intervals: include the lower bound with >= and exclude the upper bound with <. The same technique applies to other continuous data, such as strings; e.g., to find all strings starting with the letter 'A', use WHERE MyString >= 'A' and MyString < 'B'.
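
    To make that concrete, here is a minimal sketch (the @Orders table variable and its OrderDate column are made up for the example):

        -- Hypothetical sample data; note that the inserted value rounds up to Feb 18 midnight.
        DECLARE @Orders TABLE (OrderDate datetime);
        INSERT INTO @Orders VALUES ('2009-02-17 23:59:59.999');

        -- Risky: BETWEEN is closed on both ends, and the "end of day" bound itself
        -- rounds up, so this wrongly matches the row stored at midnight of Feb 18.
        SELECT * FROM @Orders
        WHERE OrderDate BETWEEN '20090217' AND '20090217 23:59:59.999';

        -- Safe: the half-open interval works unchanged for datetime, smalldatetime, datetime2, ...
        SELECT * FROM @Orders
        WHERE OrderDate >= '20090217' AND OrderDate < '20090218';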

    Can anyone explain that?

    If you're asking for an explanation of the 1/300-second precision, I must disappoint you. I can only speculate. Until a few minutes ago, I had always believed it had to do with the maximum range of numbers that can be represented in a 4-byte integer. However, I just did the math, and I found that there are only about 86.4 million milliseconds in a day, but over 2 billion values in a 4-byte integer, so that theory is down the drain 😀
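
    The arithmetic, for anyone who wants to check it:

        -- Milliseconds per day vs. the maximum value of a signed 4-byte integer.
        SELECT 24 * 60 * 60 * 1000              AS ms_per_day,  -- 86,400,000
               POWER(CAST(2 AS bigint), 31) - 1 AS int_max;     -- 2,147,483,647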

    My new theory is that it is a heritage of limitations of the timing hardware back in the early (Sybase) days of SQL Server.

    Do other databases handle time the same way? It seems so counter-intuitive.

    I have never heard of any other database storing time values in 1/300-second units. But I'm far from an expert on any other database, so who knows? Maybe others are even weirder? (But if I had to bet, I'd put my money on other databases having a more logical implementation... ;))


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/