The Scary DBA

I have twenty-plus years of experience in IT. That time was spent in technical support, development and database administration. I work for Red Gate Software as a Product Evangelist. I write articles for publication at SQL Server Central, Simple-Talk, PASS Book Reviews and SQL Server Standard. I have published two books, “Understanding SQL Server Execution Plans” and “SQL Server 2008 Query Performance Tuning Distilled.” I’m one of the founding officers of the Southern New England SQL Server Users Group and its current president. I also work on part-time, short-term, off-site consulting contracts. In 2009 and 2010 I was awarded as a Microsoft SQL Server MVP. In the past I’ve been called rough, intimidating and scary. To which I usually reply, “Good.” You can contact me through grant -at- scarydba dot kom (unobfuscate as necessary).

TSQL Tuesday #15–Automation in SQL Server

Automation is what separates the professional DBA from the amateur. That makes this a very important topic. Thanks to Pat Wright (blog|twitter) for coming up with something great to write about and hosting the event.

I recently wrote an article for the SQL Spackle series over on SQL Server Central on how to “Set Up and Schedule a Server Side Trace.” That article covers well how to set up a trace on your system so that you can automate it. But I left out a few details that are worth pointing out here.

The basics of creating the server-side trace using Profiler and scheduling it using SQL Agent are well covered in the article. The extra areas I want to address are scheduling and file management.

For scheduling, as the article points out, you schedule the start time using SQL Agent and set the end time using a variable within the trace script. But I don’t suggest how long you should run these things. Some people will disagree with this, but I think that on production systems, especially ones that need lots of tuning and tender loving care, you should run the trace 24/7. Or rather, almost 24/7.

You could start a trace, not give it a stop time, and then just manage it forever. No worries, just lots of work. But a few problems come up from this. What happens when you need to restart the server? If you’re applying service packs, or upgrading machines, or something similar, you either have to kill the trace or let it get killed when the server reboots. And then you’ll have to manually restart it to get the trace going again.

Instead, you can schedule the trace to start and stop every 23 hours and 59 minutes. Then it runs all day, every day (almost), and disabling it for a reboot is simply a matter of turning the job off in the Agent. Or, if you don’t turn it off, you let it die with the reboot. Restarting just requires turning the job back on in the Agent, or waiting for the schedule to restart the trace.
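As a rough sketch of what the stop-time part of that scheme looks like, the trace script that the Agent job runs can compute a stop time 23 hours and 59 minutes out and hand it to sp_trace_create, so the trace shuts itself down just before the next scheduled start. The trace ID variable, file path, and file size here are placeholders, and the actual event and filter calls would come from Profiler’s scripted trace definition:

```sql
-- Sketch only: stop time computed as "now + 23h59m" so the trace
-- ends itself shortly before the next SQL Agent start.
DECLARE @TraceID INT;
DECLARE @MaxFileSize BIGINT = 25; -- megabytes per trace file
DECLARE @StopTime DATETIME = DATEADD(MINUTE, 1439, GETDATE());

EXEC sp_trace_create
    @TraceID OUTPUT,
    2,                    -- option 2 = TRACE_FILE_ROLLOVER
    N'D:\Traces\MyTrace', -- placeholder path; SQL Server appends .trc
    @MaxFileSize,
    @StopTime;

-- The sp_trace_setevent and sp_trace_setfilter calls scripted out of
-- Profiler go here, followed by starting the trace:
EXEC sp_trace_setstatus @TraceID, 1;
```

The Agent job then only has to fire once a day; the @stoptime parameter handles the shutdown without any second job step.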

When you collect trace data, the best way to collect it is to a file. It’s fast and won’t affect your server in most cases. But how big should the file be? That’s a balancing act. If it’s too big, you can actually affect performance, since it takes longer to write to a larger file. If it’s too small, you run into the problem of having hundreds or thousands of little files to manage. My best advice here: figure out what works best for you. I like a number around 25MB. That’s big enough that on most systems I only have to deal with 8-10 files a day, but small enough that the trace frees up each file as it rolls over every three or four hours. But this is really a case where your mileage may vary.
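The file size and rollover behavior come from the same sp_trace_create call. A sketch of the file-management side, again with placeholder names, using the optional @filecount parameter so SQL Server prunes the oldest rollover file for you once the count is exceeded:

```sql
-- Sketch only: 25 MB rollover files, keeping at most 10 on disk.
-- With TRACE_FILE_ROLLOVER plus a @filecount, SQL Server deletes the
-- oldest trace file when a new rollover would exceed the count.
DECLARE @TraceID INT;

EXEC sp_trace_create
    @TraceID OUTPUT,
    2,                    -- TRACE_FILE_ROLLOVER
    N'D:\Traces\MyTrace', -- placeholder path
    25,                   -- @maxfilesize, in megabytes
    NULL,                 -- @stoptime (left to the Agent schedule here)
    10;                   -- @filecount: cap on rollover files kept
```

If you’d rather keep every file and archive them yourself, leave @filecount off and have a separate cleanup job sweep the trace directory.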

Just a couple of notes that will hopefully help you make decisions on how best to automate your own performance-metric data collection.

Comments

Posted by Jason Brimhall on 9 February 2011

Time to re-read that article.
