Interpret Output of TSQL_Replay trace

  • Can someone please tell me which column provides the actual execution timings based on the output from TSQL_Replay trace?

  • The "Duration" column in Profiler. In SQL 2008 it shows the timing in microseconds (i.e. divide by 1,000,000 to get seconds), and in SQL 2005 it shows the execution time in milliseconds. There are also StartTime and EndTime columns.

    You can export the data into a table and then manipulate it with T-SQL. You can either display the "Duration" column in seconds or subtract StartTime from EndTime. Hope this helps. See the image attached.
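    As a sketch of that approach, assuming the trace has been exported to a table named dbo.ReplayTrace (a hypothetical name; use whatever table you loaded the trace into), the conversion could look like this:

        -- dbo.ReplayTrace is a hypothetical name for the table the trace was exported to.
        -- On SQL 2008 the Duration column is stored in microseconds,
        -- so divide by 1,000,000.0 to get seconds.
        SELECT TextData,
               Duration / 1000000.0              AS DurationSeconds,
               DATEDIFF(ms, StartTime, EndTime)  AS ElapsedMilliseconds
        FROM dbo.ReplayTrace
        WHERE EventClass = 12   -- SQL:BatchCompleted
        ORDER BY Duration DESC;

    The DATEDIFF result is a cross-check against the Duration column; the two should roughly agree once the units line up.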


  • Ravi (9/10/2013) wrote:


    The "Duration" column in Profiler. In SQL 2008 it shows the timing in microseconds (i.e. divide by 1,000,000 to get seconds), and in SQL 2005 it shows the execution time in milliseconds. There are also StartTime and EndTime columns.

    You can export the data into a table and then manipulate it with T-SQL. You can either display the "Duration" column in seconds or subtract StartTime from EndTime. Hope this helps. See the image attached.

    Thanks for the response. Yes, I got that part. When I replay, at the bottom there are two columns, ExecutionTime and ExecutionAndFetchTime. Which one should we treat as the run time in the test environment, and is it in milliseconds or seconds? The attached image has more details.

  • In case someone else is looking for an answer, hopefully this will help. Jonathan Kehayias explained it to me in detail; below is the relevant excerpt from our email conversation.

    There is a ProfilerSpecialEvents.XML.profiler file that defines these events; on my 2012 instance it is located in C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Profiler\TraceDefinitions. From the XML:

    <COLUMNLIST>
      <COLUMN>
        <ID>65521</ID>
        <TYPE>1</TYPE>
        <NAME>SourceRow</NAME>
        <DESCRIPTION>The original row number for the event executed</DESCRIPTION>
        <FILTERABLE>true</FILTERABLE>
        <REPEATABLE>false</REPEATABLE>
        <REPEATEDBASE>false</REPEATEDBASE>
      </COLUMN>
      <COLUMN>
        <ID>65520</ID>
        <TYPE>1</TYPE>
        <NAME>ExecutionTime</NAME>
        <DESCRIPTION>Time taken for query execution excluding result fetching</DESCRIPTION>
        <FILTERABLE>true</FILTERABLE>
        <REPEATABLE>false</REPEATABLE>
        <REPEATEDBASE>false</REPEATEDBASE>
      </COLUMN>
      <COLUMN>
        <ID>65519</ID>
        <TYPE>1</TYPE>
        <NAME>ExecutionAndFetchTime</NAME>
        <DESCRIPTION>Time taken for query execution including result fetching</DESCRIPTION>
        <FILTERABLE>true</FILTERABLE>
        <REPEATABLE>false</REPEATABLE>
        <REPEATEDBASE>false</REPEATEDBASE>
      </COLUMN>
    </COLUMNLIST>

    My guess would be that these are both in microseconds, and that they don't get the same microsecond-to-millisecond conversion that the normal Duration column does in Profiler. I'd have to stack-walk the processing to validate this 100% from the call-stack frames, but the numbers look like microseconds, equating to 1.2-second executions, unless you were replaying long-running processes that took 20 minutes to replay, which is what durations of 1.2 million milliseconds would equate to.

    Which one do you use? It depends on whether you want your metric to include the fetch time needed to send the results over the network. I'd probably rely on ExecutionTime for tracking server-side processing, depending on what I was trying to compare.
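    As a sketch, if the replay output were saved to a table (dbo.ReplayResults is a hypothetical name) with those two columns, and assuming both values are in microseconds as guessed above, the fetch overhead could be separated out like this:

        -- dbo.ReplayResults is a hypothetical table holding the replay output columns.
        -- Assumes ExecutionTime and ExecutionAndFetchTime are in microseconds.
        SELECT SourceRow,
               ExecutionTime         / 1000000.0 AS ExecSeconds,
               ExecutionAndFetchTime / 1000000.0 AS ExecAndFetchSeconds,
               (ExecutionAndFetchTime - ExecutionTime) / 1000000.0 AS FetchSeconds
        FROM dbo.ReplayResults
        ORDER BY ExecutionTime DESC;

    A large FetchSeconds relative to ExecSeconds would suggest the result-set transfer, not the query itself, dominates the replay timing.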
