Stairway to Server-side Tracing - Step 3: Creating a SQL Trace Using SQL Server Profiler

  • Comments posted to this topic are about the item Stairway to Server-side Tracing - Step 3: Creating a SQL Trace Using SQL Server Profiler

  • Nice article, but just a simple question: why is the date in the future? Stairway to Server-side Tracing - Step 3: Creating a SQL Trace Using SQL Server Profiler

    By Dan Guzman, 2011/04/20

  • I honestly don't know why the date is in the future. Maybe it's because SQLServerCentral.com figured I'd have a better chance meeting a deadline 3 months out 🙂

  • Nice article. I'm really enjoying this series.

    Just one little issue with Listing 4: the last line should read ',@status = 2 ;-- delete trace'.
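
    For reference, a minimal sketch of the corrected stop-and-delete sequence (the trace ID of 2 is only a placeholder; use the ID returned by sp_trace_create):

    -- Stop the trace, then delete its definition from the server.
    -- The trace ID of 2 is a placeholder value.
    EXEC sp_trace_setstatus
         @traceid = 2
        ,@status = 0 ;-- stop trace

    EXEC sp_trace_setstatus
         @traceid = 2
        ,@status = 2 ;-- delete trace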

  • Nice one...

  • I'd like to hear some feedback on how to handle the case where you trace your production box (say the main database is called Production) and then try to replay those trace files on a development box where a copy of the database is called Test and has a different database ID.

    Some brief research makes it look like you can load the trace files into a table from Profiler, then update the database ID and/or database name in the trace table before attempting the replay.

  • I have used this method for a while, and I still think it is the friendliest way for new database developers to debug.

    Excellent explanation.

  • Indianrock (7/24/2013)

    I'd like to hear some feedback on how to handle the case where you trace your production box (say the main database is called Production) and then try to replay those trace files on a development box where a copy of the database is called Test and has a different database ID.

    Some brief research makes it look like you can load the trace files into a table from Profiler, then update the database ID and/or database name in the trace table before attempting the replay.

    That is correct - open the server-side trace file(s) in Profiler and save to a table so that you can modify the trace data prior to replay. Alternatively, create and load a table with the same schema Profiler expects for the replay. Example code below. Note that you may have database names embedded in the TextData, so make sure you change those too (a sketch of those follow-up updates appears after the query).

    SELECT
        IDENTITY(int, 0, 1) AS RowNumber
        ,EventClass
        ,BinaryData
        ,DatabaseID
        ,NTUserName
        ,NTDomainName
        ,HostName
        ,ClientProcessID
        ,ApplicationName
        ,LoginName
        ,SPID
        ,StartTime
        ,EndTime
        ,Error
        ,DatabaseName
        ,RowCounts
        ,RequestID
        ,EventSequence
        ,IsSystem
        ,ServerName
        ,TextData
        ,EventSubClass
        ,Handle
    INTO dbo.replay_trace
    FROM fn_trace_gettable(N'C:\Traces\ReplayTrace.trc', default);
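
    As a hedged follow-up sketch (assuming the dbo.replay_trace table created above, and using the Production/Test names from the question purely as placeholders), the remapping before replay might look like this:

    -- Remap the captured events to the development copy of the database.
    -- 'Production' and 'Test' are placeholder names; adjust to your environment.
    UPDATE dbo.replay_trace
    SET DatabaseID = DB_ID(N'Test') -- database ID of the copy on the development box
       ,DatabaseName = N'Test'
    WHERE DatabaseName = N'Production';

    -- Fix any database names embedded in the statement text itself.
    UPDATE dbo.replay_trace
    SET TextData = REPLACE(CAST(TextData AS nvarchar(max)), N'Production', N'Test')
    WHERE CAST(TextData AS nvarchar(max)) LIKE N'%Production%';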

  • Thanks!! I dumped a trace file into a trace table, updated just the database name (since the database IDs were the same on prod and dev), and replayed from the table.

    What I've been asked to do is test SQL 2012 compression on some of our busiest tables to see if it improves I/O. I'm still thinking it might be easier to find some of our most intensive SELECT statements, drop the data buffers and plan cache, and record the runtime of the SELECT before and after compression, rather than trying to accomplish the same thing via Profiler trace replays (a rough sketch of that approach follows this post).

    My manager thinks, due to previous experience using the Profiler GUI on a production stock market-related database, that Profiler doesn't always "hose production."

    I'm sticking with server-side because we have seen in the past that, unless it's dialed down to a very few events with filters and run just for the duration of one problematic query, it does hose production. So far the server-side trace, using the T-SQL Replay template, has had no noticeable effect on production. I'm dumping 100 MB trace files to a spot on our NetApp filer in production, and it creates a new 100 MB file about every 5 seconds.

    Given that, I don't think we're going to be able to trace an entire business day's traffic and replay it on a much less powerful QA box.
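
    A rough sketch of that timing approach (dbo.BusyTable and the query are hypothetical stand-ins for one of the captured statements; clearing the caches should only be done on a test box):

    -- Clear the buffer pool and plan cache so each run starts cold (test box only).
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;
    DBCC FREEPROCCACHE;

    -- Capture I/O and elapsed-time statistics for the candidate query.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*)
    FROM dbo.BusyTable
    WHERE SomeColumn = 42; -- record logical reads and elapsed time from the messages output

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;

    -- Apply page compression, then clear the caches again and rerun the same query to compare.
    ALTER TABLE dbo.BusyTable REBUILD WITH (DATA_COMPRESSION = PAGE);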

  • My manager thinks, due to previous experience using the Profiler GUI on a production stock market-related database, that Profiler doesn't always "hose production."

    A properly filtered Profiler trace won't impact production much (a sketch of that sort of narrow filtering follows this post). Although your manager's experience may vary, I have personally seen a DBA unwittingly crash a production stock market-related database with an unfiltered Profiler trace of several thousand events per second. Even if the server doesn't become entirely unresponsive, a Profiler trace of high-volume events will slow response time and throughput considerably.

    Level 10 of this Stairway details the performance difference of a T-SQL Replay trace using Profiler versus a server-side trace. I think you are wise to stick with a server-side trace for this task 🙂
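
    For illustration, a minimal sketch of that kind of narrow filtering using the server-side equivalent, sp_trace_setfilter (the trace ID and filter values are placeholders; the column IDs are the documented trace column IDs for DatabaseName, ApplicationName, and Duration):

    -- Placeholder trace ID; use the ID returned by sp_trace_create.
    DECLARE @TraceID int;
    SET @TraceID = 2;

    -- Only events from one database (column 35 = DatabaseName, 0 = AND, 0 = equal).
    EXEC sp_trace_setfilter @TraceID, 35, 0, 0, N'Production';

    -- Exclude the trace tooling itself (column 10 = ApplicationName, 7 = not like).
    EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler%';

    -- Only statements taking 100 ms or more (column 13 = Duration, in microseconds; 4 = greater than or equal).
    DECLARE @duration bigint;
    SET @duration = 100000;
    EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;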

  • Are the scripts created with SQL Server Profiler 2012 backward compatible?

    I.e., can we create them on our 2012 instance and run them for a customer on 2008 R2, or maybe even 2005?

  • The trace scripts included in this Stairway are compatible with SQL Server 2005 through 2012. However, note that the GroupID event column (returned as the last column by fn_trace_gettable) was introduced in SQL Server 2008.
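
    For what it's worth, a small sketch of one way to confirm on the target server whether the newer column is available before a script relies on it (this check is an assumption about how you might guard the difference, not part of the Stairway scripts):

    -- Returns a row on SQL Server 2008 and later, where the GroupID trace column exists;
    -- returns nothing on SQL Server 2005.
    SELECT trace_column_id, name
    FROM sys.trace_columns
    WHERE name = N'GroupID';

    -- The engine version can also be checked directly.
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion;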
