T-SQL

Posted Tuesday, February 17, 2009 4:48 PM


SSCertifiable

Steve Jones - Editor (2/17/2009)
OK, my head hurts.

I've awarded back points since there is obviously some issue here. I missed the correct answer with 1/30, so I've fixed that.

As far as the other items, I've changed the colon to a period for the ms, but I get a conversion error when adding the "T" in the date format. I've also changed the answers to show one correct answer.

Apologies for the delays.


Hi Steve,

No need to apologize. You do a great job, day in, day out, keeping the site interesting. Errors will slip through - if only to prove that you are still human. (You are, aren't you?)

I'm intrigued by the error you got when you included the T. Can you post the EXACT value you used when you got that error? Looking back at the question and answer, I remember that it already had two formats in use - one for loading the sample data, with dashes between the date components, and one in the query, without the dashes. A well-formed datetime constant with a time component needs both: dashes between the date components AND an uppercase T between date and time (plus, of course, the colons between the time components and the dot before the milliseconds).
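
For illustration, a quick sketch (the sample value is mine, not the one from the question):

-- Language- and DATEFORMAT-independent datetime constant: dashes in the date,
-- uppercase T before the time, colons in the time, and a dot before the milliseconds.
SELECT CAST('2009-01-31T13:00:00.995' AS datetime);
-- Keeping the T but dropping the dashes is not accepted:
-- SELECT CAST('20090131T13:00:00.995' AS datetime);  -- Msg 241, conversion failed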



Hugo Kornelis, SQL Server MVP
Visit my SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Post #658968
Posted Tuesday, February 17, 2009 5:28 PM
SSC-Addicted

Hugo Kornelis (2/17/2009)

Finally, the datetime datatype is not inexact. It is basically a counter of the number of 1/300 second intervals since a fixed starting point (midnight of Jan 1st, 1900), so in many regards it is equivalent to an integer.

Just to clarify this, it is not actually a count of 1/300ths of a second ('ticks') since some epoch. It is an 8-byte field composed of two 4-byte integers.
The first 4 bytes are an integer: the number of days after (or before - it's signed!) 01 Jan 1900.
The second 4 bytes are the number of 1/300ths of a second ('ticks') that have passed since midnight (up to 25,920,000 of them).
For sorting and comparison purposes, the whole thing can be treated as a simple 8-byte integer, but be aware of the days-ticks split.
Note that neither timezone nor daylight saving information is stored.
(ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/tsqlref9/html/9bd1cc5b-227b-4032-95d6-7581ddcc9924.htm)
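
A quick way to see those two integers (just a sketch; the sample value is arbitrary):

DECLARE @d datetime;
SET @d = '2009-01-31T13:00:00.995';
SELECT CAST(@d AS binary(8));  -- first 4 bytes = days since 1900-01-01, last 4 = ticks since midnight
SELECT CAST(SUBSTRING(CAST(@d AS binary(8)), 1, 4) AS int) AS days_part,
       CAST(SUBSTRING(CAST(@d AS binary(8)), 5, 4) AS int) AS ticks_part;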

Also, IIRC, IBM mainframes (360 & 370, anyway) updated their TOD clock every 1/300th of a second, to a precision of better than a microsecond. That was an interesting example of the difference between precision and accuracy!
Post #658977
Posted Tuesday, February 17, 2009 8:19 PM


SSC-Enthusiastic

Hugo Kornelis (2/17/2009)

Anyhow, never indexing a datetime is not a wise decision.

More of an avoidance than a hard-and-fast rule. As an ordering method, not a problem. As a clustered index, ok. I'm a little concerned about how a clustered datetime would affect multiple users inserting at the same time. But try querying for 2/16/09 8:17pm (the time right now MST), in a table where the datetime is acting like a timestamp. It's more like indexing a real number.

As for high selectivity, I DO index GUIDs. But I would never make a user enter one!

Finally, the datetime datatype is not inexact.

Yes, dates are integers at some level. But my users' concept of dates doesn't extend to 3.333 milliseconds.

Now, confession time. I actually LIKE Oracle's approach of storing the time as the decimal portion of a floating point number, where it's the fraction of the day. Maybe that's where the aversion to indexing dates comes from.

Kevin


--
Please upgrade to .sig 2.0
Post #659022
Posted Wednesday, February 18, 2009 12:36 AM
Old Hand

And actually there are two answers.

2009-01-31 00:00:00.000, 2009-01-31 00:01:01.000, 2009-01-31 12:59:59.128
2009-01-30 00:00:00.000, 2009-01-31 00:01:01.000, 2009-01-31 13:00:00.995
2009-01-30 00:00:00:000, 2009-01-31 00:01:01:127
2009-01-31 00:00:00.000, 2009-01-31 00:01:01.000, 2009-01-31 21:59:58.347
2009-02-1 00:01:01.000, 2009-01-31 21:59:58.347, 2009-01-31 23:59:59.999
2009-02-01 00:00:00:003 2009-01-31 09:01:30:347 2009-01-31 23:59:59:999
Msg 241, Level 16, State 1, Line 1 Conversion failed when converting datetime from character string.


So the moderators should either not count the question, or give us a bonus point.




Ramu
No Dream Is Too Big....!
Post #659123
Posted Wednesday, February 18, 2009 1:09 AM
SSC-Addicted

ramu.valleti (2/18/2009)
And actually there are two answers.

{emphasis changed by brewmanz}
2009-01-31 00:00:00.000, 2009-01-31 00:01:01.000, 2009-01-31 12:59:59.128
...
2009-01-31 00:00:00.000, 2009-01-31 00:01:01.000, 2009-01-31 21:59:58.347


Please explain how you got 2009-01-31 12:59:59.128 (as in, ending with .128)

I can only get values ending in .127 or .130 when looking around .128.
Post #659136
Posted Wednesday, February 18, 2009 1:32 AM
Old Hand

Hi brewmanz,

You are right, I did get .127 and not .128.

My apologies for posting incorrectly.


Regards,
Ramu




Ramu
No Dream Is Too Big....!
Post #659144
Posted Wednesday, February 18, 2009 1:35 AM


SSCertifiable

knechod (2/17/2009)
Hugo Kornelis (2/17/2009)

Anyhow, never indexing a datetime is not a wise decision.

More of an avoidance than a hard-and-fast rule. As an ordering method, not a problem. As a clustered index, ok. I'm a little concerned about how a clustered datetime would affect multiple users inserting at the same time.


No problem at all. A clustered index doesn't have to be unique. Admittedly, there are drawbacks to having a nonunique clustered index, but if the benefits exceed those drawbacks, then by all means go for it. ;)

Your mentioning inserting also reminds me that I forgot to include an important benefit of clustering on a datetime - if said datetime is the moment of inserting, you'll avoid time-consuming page splits. (The same advantage applies to identity and to GUID with a default of NEWSEQUENTIALID(), but not to GUID with a default of NEWID()).
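
As a sketch (the table and column names are made up for illustration):

-- DateInserted defaults to the moment of insert, so new rows always land at the
-- end of the clustered index and mid-index page splits are avoided.
CREATE TABLE dbo.AuditTrail
   (DateInserted datetime NOT NULL DEFAULT (GETDATE()),
    Info varchar(100) NOT NULL);
CREATE CLUSTERED INDEX CX_AuditTrail_DateInserted ON dbo.AuditTrail (DateInserted);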

But try querying for 2/16/09 8:17pm (the time right now MST), in a table where the datetime is acting like a timestamp. It's more like indexing a real number.


If I need to find rows with that EXACT time (to the millisecond), I would filter for DateInserted = '2009-02-16T20:17:00.000'. If I need to find all rows inserted in that minute, I'm back at the old interval problem already discussed at length in this thread, except the interval is now a minute instead of a day:
WHERE DateInserted >= '2009-02-16T20:17:00.000'
AND DateInserted < '2009-02-16T20:18:00.000'

Both the exact match and the range will benefit from a clustered index. The exact match will also benefit from a nonclustered index; the range match might benefit.

As for high selectivity, I DO index GUIDs. But I would never make a user enter one!

Finally, the datetime datatype is not inexact.

Yes, dates are integers at some level. But my users' concept of dates don't extend to 3.333 milliseconds.


The reason why inexact values are generally less useful in an index is not tied to the users' concept of the values, but to conversion and representation issues. Even though conversion from float to text and back is (probably) documented somewhere, it's too complicated to predict. And that can cause weird results:
DECLARE @a float, @b float;
SET @a = 134.7;
SET @b = 10.0;
SET @a = @a / @b;    -- @a now holds the result of 134.7 / 10.0
SET @b = 13.47;      -- @b holds the literal 13.47
SELECT @a, @b;       -- the two values look the same when displayed...
IF @a = @b PRINT 'Equal' ELSE PRINT 'Inequal';   -- ...yet this prints 'Inequal'

For datetime, I could make a similar repro, using a millisecond value that doesn't end in a 3, 7, or 0. But that is very predictable, and hence easily avoidable. (The floating point issue is not predictable at all - I had to try various combinations of values in the code above before I got the desired results...)
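
A sketch of what I mean (the value is chosen to echo the .128/.127 discussion above):

-- datetime stores the time as 1/300-second ticks, so milliseconds are rounded
-- to the nearest .000 / .003 / .007. A value ending in .128 comes back as .127:
DECLARE @d datetime;
SET @d = '2009-01-31T12:59:59.128';
SELECT @d;   -- 2009-01-31 12:59:59.127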

Actually, the above is mainly a discussion on why you should not equality search on inexact values, such as floating point numbers. You can safely use range search (e.g., match all values between 13.46999 and 13.47001). And just as with datetime, an index can still be valuable even in range search operations.

Bottom line: Inexact numbers make a bad primary key column - this goes 100% for floating point numbers, and, hmmm, let's say 95% for datetime values. But depending on use, inexact numbers can be a prime candidate for a clustered index, and can also be considered for nonclustered indexes.



Hugo Kornelis, SQL Server MVP
Visit my SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Post #659148
Posted Wednesday, February 18, 2009 6:51 AM
SSCrazy

Thanks Hugo for the explanation. I learn so much from this forum. It does seem to be a pretty odd way to handle time. Does this change in SQL 2008 with separate date and time data types?

Thanks again.
Post #659319
Posted Wednesday, February 18, 2009 7:09 AM


SSC-Insane

I'd like to hear Hugo's answer to your question, but IMHO not much really changes in SQL Server 2008 with regard to the DATE and TIME data types. When working with time, you will most likely still need ranged searches in most of your queries. The DATE type, if used, would make it easier to find all records entered on a specific date, as you won't need a ranged search in that case.
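
For example (sketch only, with made-up table and column names):

-- date column (SQL Server 2008): a whole day is a single value, so equality works
SELECT * FROM dbo.Orders WHERE OrderDate = '20090131';
-- datetime column: still safest as a half-open range
SELECT * FROM dbo.Orders
WHERE OrderDT >= '20090131' AND OrderDT < '20090201';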




Lynn Pettis

For better assistance in answering your questions, click here
For tips to get better help with Performance Problems, click here
For Running Totals and its variations, click here or when working with partitioned tables
For more about Tally Tables, click here
For more about Cross Tabs and Pivots, click here and here
Managing Transaction Logs

SQL Musings from the Desert Fountain Valley SQL (My Mirror Blog)
Post #659349
Posted Wednesday, February 18, 2009 7:30 AM


SSCertifiable

skjoldtc (2/18/2009)
Thanks Hugo for the explanation. I learn so much from this forum. It does seem to be a pretty odd way to handle time. Does this change in SQL 2008 with separate date and time data types?

Thanks again.


Hi skjoldtc,

SQL Server 2008 adds several new data types, but doesn't change the way the current datatypes (datetime and smalldatetime) work. And all things considered, that is a good thing - you don't want your existing code to break when upgrading, right?

Many of the issues discussed here apply to the new data types (date and time, but also datetimeoffset and the hideously named datetime2) as well, though obviously not the weird 1/300 second precision - the new datatypes that include a time portion all allow the user to specify 0 to 7 digits of fractional second precision.

As Lynn says, searching for a particular day becomes easier with the date datatype, as this is now one single value so an equality search can be used (though you still will get correct results when using the range search technique). For all other datatypes, I would still continue to use the half-open interval search outlined earlier in this discussion, as it will always work even when the datatype later changes, and it saves me research time. Sure, I can save myself some typing by first checking the datatype (datetime2(5) - so that is 5 digits after the decimal) and then using BETWEEN with an end datetime value of '2009-01-31T23:59:59.99999' - but I save myself a lot more time by not having to check the datatype at all and just using < '20090201'. (And since the default time part is still midnight, I can omit the time portion in this case).
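
To make that concrete (a sketch; the table and column names are made up):

-- All rows from 31 January 2009, regardless of whether OrderMoment is
-- datetime, smalldatetime, date, or datetime2 of any precision:
SELECT * FROM dbo.Orders
WHERE OrderMoment >= '20090131' AND OrderMoment < '20090201';
-- The BETWEEN alternative only works after checking the precision, e.g. for datetime2(5):
-- WHERE OrderMoment BETWEEN '20090131' AND '2009-01-31T23:59:59.99999'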

There might or might not be other, new issues with the new date/time datatypes, but I haven't played with them quite enough to find them.



Hugo Kornelis, SQL Server MVP
Visit my SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Post #659376