• It is the same precision, but can you actually get a sequence like this?

    7:00:00.000
    7:00:00.000
    7:00:00.003
    7:00:00.003
    7:00:00.007
    7:00:00.007
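
    Those .000/.003/.007 endings are just datetime's rounding at work. As a minimal sketch (the literal timestamps are made-up examples, not values from any run here), you can watch the rounding happen:

    -- datetime snaps milliseconds to the nearest .000/.003/.007 increment.
    -- The literal times are hypothetical, chosen only to show the rounding.
    DECLARE @t TABLE (entered datetime)
    INSERT @t SELECT '20080101 07:00:00.001'  -- stored as .000
    INSERT @t SELECT '20080101 07:00:00.002'  -- stored as .003
    INSERT @t SELECT '20080101 07:00:00.005'  -- stored as .007
    SELECT entered FROM @t

    Any millisecond value you store gets snapped to that 1/300-second grid.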

    On my desktop (dual core, 2GHz), I get:

    .200 (repeated over 100 times)
    .310 (repeated, difference of 110ms)
    .373 (repeated, difference of 63ms)
    .467 (repeated, difference of 94ms)

    On our SQL 2K server (2 x 2.39GHz), I get:

    .653
    .843
    .030 (next second)
    .187

    I think this has more to do with the client and batching than with precision. If I do this server-side, I get:

    .247
    .263
    .280
    .293
    .310
    .327
    .340

    There are multiple rows at each value, with the step between consecutive values alternating between 13 and 17 ms (which looks like an underlying ~15.6 ms timer tick snapped to datetime's 1/300-second grid), and roughly 20-25 rows per value. Code:

    create table logger (mydate datetime)
    go

    -- Tight loop: stamp just under 10,000 rows with the current server time.
    DECLARE @I INT
    SET @I = 1
    WHILE @I < 10000
    BEGIN
        insert logger select getdate()
        SET @I = @I + 1
    END
    go

    select * from logger
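
    To see the 20-25-rows-per-value clustering directly, a grouped count over the same logger table (my addition, not part of the original run) lays out the distribution:

    -- How many inserts landed on each distinct timestamp?
    select mydate, count(*) as rows_at_value
    from logger
    group by mydate
    order by mydate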

    My guess is that in a tight loop like this one, there are still delays as the CPU switches over and the I/O catches up (think two sets of writes: log + data), plus maybe other inefficiencies.

    SQL Server isn't designed as a real-time data capture system. It will log to the thousandth of a second (within the .000/.003/.007 values), but it can't necessarily log every millisecond. You need a real-time capture device for that, and can then insert into SQL Server afterwards. Even if you send data that fast, it won't necessarily be inserted and committed at that speed.