Thank you for your reply. This is driving me nuts, and I'm sure I'm not the only one out there who has run across this issue.

    Ninja's_RGR'us (3/27/2010)


    Are you using multiple threads in the application?

    Yes. How would that affect the results?


    Are you using parallelism?

    Yes. I have not changed any of the default server settings, and the application is not passing query hints to set MAXDOP.
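
    For reference, the instance-level setting can be checked with sp_configure (a value of 0, the default, lets SQL Server use all available processors):

        -- 'max degree of parallelism' is an advanced option, so expose it first
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- run_value 0 = SQL Server may use every available processor
        EXEC sp_configure 'max degree of parallelism';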


    Your query seems to basically use this equation... end time from the last query minus the start time from the first query.

    I think a better equation would be SUM(endtime - starttime). That keeps an errant query running before or after the real work from skewing those results.

    I'm not sure what you mean here. To get the total duration in milliseconds, I take the difference between when it started and when it ended, using the DATEDIFF function (DATEDIFF(datepart, startdate, enddate)).
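
    Put side by side, the two calculations would look something like this (just a sketch; dbo.TraceResults and the StartTime/EndTime columns stand in for my actual trace table):

        -- What I'm doing now: wall-clock span from the first start to the last end
        SELECT DATEDIFF(ms, MIN(StartTime), MAX(EndTime)) AS WallClockMs
        FROM dbo.TraceResults;

        -- The suggested approach: sum each query's own duration.
        -- CAST to BIGINT so the sum can't overflow INT over millions of rows.
        SELECT SUM(CAST(DATEDIFF(ms, StartTime, EndTime) AS BIGINT)) AS TotalQueryMs
        FROM dbo.TraceResults;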


    One thing that could help explain why this is happening: if the server is running at 100% for the whole process, and assuming it uses all 8 available processors and each query uses only 1 processor, you just increased your available processing time by a factor of 8... but not 15, obviously. This means that if you have 8 queries each running in exactly 1 second, they could all be processed in 1 second of REAL TIME while adding up to 8 seconds total.

    This would make sense. I'll monitor CPU usage next time and see. Would parallelism also have the same effect? On smaller, less intensive tests, the duration time and SQL time seem to fall into place (SQL time being less than total duration time).
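
    When I rerun the test, something along these lines should show whether queries are actually going parallel (sys.dm_os_tasks is a SQL 2005+ DMV; a session with more than one active task is running a parallel plan):

        -- Sessions currently using more than one worker task = parallel plans
        SELECT session_id, COUNT(*) AS task_count
        FROM sys.dm_os_tasks
        WHERE session_id > 50   -- skip system sessions
        GROUP BY session_id
        HAVING COUNT(*) > 1;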


    One thing I first noticed back in SQL 2000 (not sure if it still applies in 2005/08) is that I often had queries that ran in almost no time (0 ms), but every once in a while, maybe every 4 or 5 runs, the server showed 13 ms. I'm assuming there's a rounding error in there, or just a weird way of calculating things.

    Even if a 3 ms error is small on each query, compounded over 68 million queries it adds up to almost 57 hours (3 ms × 68,000,000 ≈ 204,000 seconds).

    On SQL 2005/2008, the Duration column is captured in microseconds instead of milliseconds when saved to a table (which is what I'm doing). So what used to show as 0 ms might show as 1000 microseconds, allowing you to get a more accurate time.
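
    That means the saved values need dividing by 1000 to compare against the old millisecond numbers (dbo.MyTraceTable is just a placeholder for the table I save the trace to):

        -- Duration is saved in microseconds on 2005/2008;
        -- divide by 1000.0 to report milliseconds
        SELECT TextData,
               Duration / 1000.0 AS DurationMs
        FROM dbo.MyTraceTable;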


    Can you check the average run time of each query? If I assume 68 M queries over 5 hours, that's only about 265 microseconds AVERAGE (18,000 seconds / 68,000,000 queries). I think this could explain a great deal in your situation.

    I'm still combing through the latest test, so I'll do further analysis later today.
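
    For the averages, I'll run something like this against the saved trace (again, dbo.MyTraceTable is a placeholder, and Duration is in microseconds there):

        -- Per-query count and average duration from the saved trace
        SELECT COUNT_BIG(*)                  AS QueryCount,
               AVG(CAST(Duration AS BIGINT)) AS AvgDurationMicroseconds
        FROM dbo.MyTraceTable;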