The default trace in SQL Server is something I have been experimenting with lately. It is a feature included in SQL Server 2005 and later, and it is enabled by default when you install SQL Server. It has a number of uses, from auditing to capacity planning, and I will look at an example here.
First, though, let's check that the default trace is enabled on my SQL Server instance. The following T-SQL will check for this:
--Check to see that the default trace is enabled.
SELECT *
FROM sys.configurations
WHERE configuration_id = 1568;
If the trace is enabled you will get results like the screen shot below:
Notice the [Value] and [Value_in_use] columns are set to 1 – this means the default trace is enabled.
If it's not enabled, the [Value] and [Value_in_use] columns are set to 0.
If the trace is disabled, the following code will enable it. You need to be a member of the sysadmin or serveradmin fixed server role to run RECONFIGURE.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;
CAUTION: I believe that if you disable this trace, certain reports in Management Studio will no longer function correctly.
This trace is stored in a file, and you have two T-SQL options for getting the path of the default trace file for your SQL Server instance. To quote from Books Online (BOL): "The sys.traces catalog view contains the current running traces on the system. This view is intended as a replacement for the fn_trace_getinfo function."
So if I run:
SELECT * FROM sys.traces
I get the following results returned (the screen shot is a subset):
You can see I have captured the path of the file, the maximum size of the trace file (20MB), the time the trace will stop, and the maximum number of files to store. The query will in fact return information about all traces running on your instance; the default trace has an ID of 1.
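If all you want is the path of the default trace file, sys.traces also exposes an is_default flag, so you can filter the default trace out directly rather than relying on it having an ID of 1:

SELECT path
FROM sys.traces
WHERE is_default = 1;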
You can get some of this information by querying the fn_trace_getinfo function like so:
SELECT * FROM ::fn_trace_getinfo(0)
Passing the value 0 to the function returns information on all running traces. The results are:
The trace property values in the property column have the following meanings:
1 = Trace options
2 = File name
3 = Max size (MB)
4 = Stop time
5 = Current trace status (0 = stopped, 1 = running)
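Because the results come back as property/value pairs, you can filter on the property column to pick out a single setting. For example, to return just the file name (property 2) of the default trace (trace ID 1):

SELECT value
FROM ::fn_trace_getinfo(1)
WHERE property = 2;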
This contrived example looks at using the default trace to find out when your data and log files have auto-grown. Now we all know that auto-grow should only be used as a safety catch, and that we should be sizing our database files appropriately up front. But if your database does need to auto-grow and the safety catch kicks in, you may want to know when and where that happened. This contrived example shows how you can find out this information using the default trace in SQL Server.
Let's create a database to test this out on:
CREATE DATABASE default_trace;
My model database has the standard out-of-the-box settings, is around 4MB in size, and has auto-growth turned on, so the new database picks up these settings.
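If you want to confirm those inherited settings rather than take them on trust, sys.database_files will show the size and growth increment for each file of the new database (size is stored in 8KB pages, so multiplying by 8 and dividing by 1024 gives MB; is_percent_growth tells you whether growth is a percentage or a number of pages):

USE default_trace;
SELECT name,
       size * 8 / 1024 AS size_mb,
       growth,
       is_percent_growth
FROM sys.database_files;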
We then create a table to allow us to add data and grow the database files:
CREATE TABLE [dbo].[T1](
[c1] [bigint] IDENTITY(1,1) NOT NULL,
[c2] [char](4000) NULL
) ON [PRIMARY]
ALTER TABLE [dbo].[T1] ADD CONSTRAINT [DF_T1_c2] DEFAULT ('a') FOR [c2]
Next we need to add enough data to the database to force the database files to auto grow. I used the following script:
DECLARE @counter BIGINT;
SELECT @counter = 0;
WHILE (@counter < 1000)
BEGIN
    INSERT INTO dbo.T1 DEFAULT VALUES;
    SELECT @counter = @counter + 1;
END;
Then we can run the following to pull out the auto growth statements from the default trace:
SELECT t.DatabaseName, t.FileName, e.name AS EventName,
       t.StartTime, t.EndTime
FROM fn_trace_gettable((SELECT path FROM sys.traces WHERE id = 1), DEFAULT) t
INNER JOIN sys.trace_events e
    ON t.EventClass = e.trace_event_id
WHERE e.name LIKE '%Auto Grow%';
The following screen shot shows a partial result set; we can see one log growth and one data file growth. In fact the insert caused the log to auto-grow once, which represents a 10% increase on 1MB, and the data file to grow three times, each time adding 1MB to the data file.
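The trace also records how much each file grew: for the auto-grow events I believe the IntegerData column holds the number of 8KB pages added, and the Duration column holds the time the growth took (stored in microseconds from SQL Server 2005 onwards). A variation on the query above that surfaces these columns might look like this:

SELECT t.DatabaseName,
       t.FileName,
       e.name AS EventName,
       t.StartTime,
       (t.IntegerData * 8) / 1024 AS GrowthMB,
       t.Duration
FROM fn_trace_gettable((SELECT path FROM sys.traces WHERE id = 1), DEFAULT) t
INNER JOIN sys.trace_events e
    ON t.EventClass = e.trace_event_id
WHERE e.name LIKE '%Auto Grow%'
ORDER BY t.StartTime;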
I'm sure there is another post in this on the potential issues of auto-grow and why you should size your databases accordingly.