How to Schedule a SQL Trace?

  • Here's what I do:

    I create a view for any trace I create, so I can easily access it.

    Here's a code example:

    -- Review all traces currently defined on the server
    SELECT * FROM sys.traces;

    -- Find the path to the actual trace file in the output above,
    -- then copy and paste it into the view definition below
    GO
    CREATE VIEW myDMLTrace AS
    SELECT *
    FROM sys.fn_trace_gettable('c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\MyDMLtrace.trc', default);
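    Once the view exists you can query it like any other table. A minimal usage sketch (the columns come from fn_trace_gettable; the EventClass filter assumes the trace captured RPC:Completed and SQL:BatchCompleted):

    -- Statements captured by the trace, slowest first
    SELECT TextData, LoginName, StartTime, Duration
    FROM myDMLTrace
    WHERE EventClass IN (10, 12)   -- 10 = RPC:Completed, 12 = SQL:BatchCompleted
    ORDER BY Duration DESC;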

    Lowell


    --Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table, to give the volunteers here representative data. With that and your description of the problem, we can provide a tested, verifiable solution to your question. Asking the question the right way gets you a tested answer the fastest way possible!

  • Thanks Lowell - that works for me superbly.

  • Yes Simon, the path you provide is relative to the server, not your machine.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • I've been using a trace set up as above and it's been working well. The problem I now have is that I need to modify the trace, but I can't find anything that tells me how to delete the trace file (either manually or in code). I keep having to create new trace files because the trace code doesn't let me overwrite the existing one. I might be missing something obvious, but how do you get at .trc files (or indeed any other types of files) that are on the server in order to delete them?
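    For what it's worth, a minimal sketch of one way around this (trace id 2 below is only an example; look yours up in sys.traces): the .trc file stays locked while the trace exists, and once the trace is stopped and closed the file can be deleted from the server's file system like any other file.

    -- Find the id, path and status of the trace you want to get rid of
    SELECT id, path, status FROM sys.traces;

    -- Stop the trace, then close it and remove its definition from the server
    EXEC sp_trace_setstatus 2, 0;   -- status 0 = stop (trace id 2 is an example)
    EXEC sp_trace_setstatus 2, 2;   -- status 2 = close and delete the definition

    -- The .trc file is now released and can be deleted on the server (e.g. via Remote Desktop or an admin share).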

  • This thread has been really useful, but I need to run a weekly trace (Mon-Fri) without causing a major performance overhead on the server. I am currently starting a trace on a separate server every Monday morning to monitor stored procedure durations, with a stop time of Friday at 17:00, writing to a table.

    I want to script this and add it to a SQL Agent job. I understand that I am going to have to write to a file and then import it into a table as a separate task, but if I run the trace on the production server, will this cause overhead?

    I know the response may be to use DMVs instead, but the database in question is using 2000 compatibility, enforced by the application vendors of course!

    Any help will be appreciated.

    DBA Dave...
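    A minimal sketch of the file-then-table import step mentioned above, which could run as a T-SQL step in the SQL Agent job once the trace has stopped (the file path and table name are placeholders):

    -- Append the finished trace file to a results table
    -- (create dbo.WeeklyTraceResults once, e.g. with SELECT ... INTO, before scheduling this)
    INSERT INTO dbo.WeeklyTraceResults
    SELECT *
    FROM sys.fn_trace_gettable('D:\Traces\WeeklyTrace.trc', default);
    -- passing default as the second argument also reads any rollover files that follow the first one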

  • I'm in the same camp of not worrying about the trace impact, after reading an article on the near-zero impact of a server-side trace, where someone actually compared throughput with no trace vs. a Profiler trace vs. a server-side trace.

    Take a look at how the graph in that article shows the server-side trace hardly affects transactions per second: only when he starts emulating 300 users doing 1,200 transactions per second do you see any deviation between no trace at all and the server-side trace.


    Lowell



  • If anything, I'd be more worried about I/O overhead if you put the trace file on your data or tempdb drive (depending on your I/O system) than about general server overhead.

    Also, as a general aside, I have found that SQL 2008 Profiler will get the second parameter wrong when you script the trace out if you have file rollover turned on.
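    For reference, the second parameter in question is @options in sp_trace_create, and a rollover-enabled trace wants it set to 2 (TRACE_FILE_ROLLOVER). A minimal sketch with placeholder path, size, stop time and file count; events still have to be added with sp_trace_setevent and the trace started with sp_trace_setstatus:

    DECLARE @TraceID int;
    DECLARE @maxfilesize bigint;
    DECLARE @stoptime datetime;
    SET @maxfilesize = 250;                 -- MB per rollover file
    SET @stoptime = '2008-10-24 17:00';     -- when the trace should stop itself
    EXEC sp_trace_create
        @TraceID OUTPUT,
        2,                                  -- @options: 2 = TRACE_FILE_ROLLOVER
        N'D:\Traces\WeeklyTrace',           -- the .trc extension is appended automatically
        @maxfilesize,
        @stoptime,
        5;                                  -- @filecount: keep at most 5 rollover files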

  • You should be a member of the sysadmin role.

  • GilaMonster - Friday, October 17, 2008 9:13 AM

    Get profiler to script the trace. (File - Export). Put the script in a SQL job, make the couple of required changes (specifying a filename and the end time) and then schedule the job to run when you need it to.

    Can this be done to trace SSAS as well? If so, how?
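    Not an answer to the SSAS question, but to illustrate the quoted approach for the database engine: a minimal sketch of wrapping an exported trace script in a SQL Agent job that fires every Monday morning (the job, step and schedule names, the wrapper procedure dbo.StartWeeklyTrace, and the start time are all placeholders):

    USE msdb;
    EXEC dbo.sp_add_job         @job_name = N'Weekly server-side trace';
    EXEC dbo.sp_add_jobstep     @job_name = N'Weekly server-side trace',
                                @step_name = N'Start trace',
                                @subsystem = N'TSQL',
                                @database_name = N'master',
                                @command = N'EXEC dbo.StartWeeklyTrace;';  -- wrap the exported trace script in a proc, or paste it here
    EXEC dbo.sp_add_jobschedule @job_name = N'Weekly server-side trace',
                                @name = N'Monday 08:00',
                                @freq_type = 8,               -- weekly
                                @freq_interval = 2,           -- Monday
                                @freq_recurrence_factor = 1,  -- every week
                                @active_start_time = 80000;   -- 08:00:00
    EXEC dbo.sp_add_jobserver   @job_name = N'Weekly server-side trace', @server_name = N'(local)';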
