Capturing Missing Information From SQL Profiler

2007-09-06

While SQL Profiler is a very helpful (and even invaluable) tool, it does not always provide all of the information needed to find the root cause of certain problems. If ObjectId, ObjectName, and TextData are all empty / NULL, then all you are left with is the knowledge that the event you are looking for occurred. That alone will help only in a controlled environment where you caused the event, not in production where something is happening sporadically.

Take for instance the particularly squirrelly issue of having tempdb grow at an alarming rate, far faster and far larger than usual. The problem you are dealing with here is determining what is taking up the space. This can be a difficult task since tempdb space is not always taken up only by temp tables. Many operations use tempdb as work space explicitly, such as static and keyset cursors, as well as DBCC CHECKDB. Other operations use tempdb when needed (typically when there is not enough space in memory to complete the operation or store the data), such as DISTINCT, ORDER BY, GROUP BY, AFTER triggers, table variables, tables returned by table-valued functions, hash joins and hash aggregate operations, and (re)creating indexes with the SORT_IN_TEMPDB option. Additionally, in SQL Server 2005, new features such as snapshot isolation level transactions and Multiple Active Result Sets (MARS) also use tempdb. And there may even be a few more. That's a lot of possibilities for what can be contributing to tempdb growth!

So first we check to see if that data is in tables or indexes (as that might point to a source of the problem) via the reserved column of tempdb.dbo.sysindexes (the value is in 8 KB pages, so be sure to divide it by 128.0 to see how many megabytes it represents). Of course, it is very possible that very little of the data causing the problem is in tables, given how many operations use tempdb. If this is the case we have to find out what is using tempdb. For this we can run SQL Profiler and trap events such as Sort Warnings (indicating that a sort could not fit in memory) and Data File Auto Grow. Sort Warnings will show us many of the operations that are using tempdb in general, and Data File Auto Grow will indicate which operations are using so much data that the database has to increase the size of tempdb. Unfortunately, the SQL (statement or object) that caused either event is not recorded in the Profiler output; both the TextData and ObjectId fields are empty. How unhelpful is that?
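As a rough sketch of that first check (an approximation, since temp tables and work tables come and go quickly, and the object names here are whatever happens to be in tempdb at the moment you run it):

```sql
USE [tempdb]
GO

-- reserved is measured in 8 KB pages; 128 pages = 1 MB
SELECT   o.[name]                AS ObjectName,
         SUM(i.reserved) / 128.0 AS ReservedMB
FROM     dbo.sysindexes i
INNER JOIN dbo.sysobjects o
        ON o.[id] = i.[id]
WHERE    o.[type] = 'U'          -- user tables, including temp tables
GROUP BY o.[name]
ORDER BY ReservedMB DESC
```

If the totals here are small compared to the size of tempdb, the space is going to work tables and sort space rather than to tables, which is exactly the case the rest of this article addresses.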

Triggers, Temp Tables, and Input Buffers, Oh My!

All is not lost, though, as we can augment the Profiler results saved to a table; we will just add a trigger to populate the TextData field. While it doesn't entirely matter which fields you choose to report on, you must at least pick EventClass, DatabaseID, SPID, and TextData; you should also select EventSubClass and ApplicationName. When you run the trace, be sure to save the results to a table. After you click on "Run" and the trace starts, execute the following (assuming that you have replaced {db_name_here} in the USE statement with whatever database you are storing the trace results in). The script assumes the results are stored in a table named TraceTable; just change the name in the ON and FROM clauses if you choose a different table name.

Please note that this has only been tested on SQL Server 2000.

USE [{db_name_here}]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE TRIGGER [dbo].[TraceTableExtraInfo]
  ON [dbo].[TraceTable]
  AFTER INSERT
AS
BEGIN
  SET NOCOUNT ON;

  DECLARE @SQL VARCHAR(40)

  -- Profiler inserts trace rows one at a time, so "inserted" holds a single row
  SELECT @SQL = 'DBCC INPUTBUFFER(' + CONVERT(VARCHAR(10), ins.SPID) + ')'
  FROM   inserted ins

  -- Capture the result set of DBCC INPUTBUFFER
  CREATE TABLE #DBCCInfo
  (
    EventType  NVARCHAR(30),
    Parameters INT,
    EventInfo  NVARCHAR(255)
  )

  INSERT INTO #DBCCInfo (EventType, Parameters, EventInfo)
    EXEC(@SQL)

  -- Copy the captured input buffer into the trace row,
  -- but never overwrite a TextData value that Profiler supplied
  UPDATE tt
  SET    tt.TextData = info.EventInfo
  FROM   dbo.TraceTable tt
  INNER JOIN  inserted ins
         ON   ins.RowNumber = tt.RowNumber
  CROSS JOIN  #DBCCInfo info
  WHERE  tt.TextData IS NULL

  DROP TABLE #DBCCInfo
END

You can then see the data from the trace with the following code:

SELECT * FROM [{db_name_here}].dbo.TraceTable

Now there are two minor issues with this approach:

  1. You have to create this trigger each time you run the trace. This includes each time you re-run the trace, even when saving to the same table, since Profiler drops and recreates the table each time, and dropping the table drops the trigger along with it.
  2. The output from DBCC INPUTBUFFER is not guaranteed to be accurate: it shows the last batch submitted on that SPID, which might be whatever entered the buffer after the event was caught by SQL Profiler and the row was inserted into the table. For Sort Warnings and Auto Grow it seems safe enough to assume that in the majority of cases the process that triggered the event will still be running by the time DBCC INPUTBUFFER runs, since that process needed the resource (tempdb in the case of a Sort Warning, more file space in the case of an Auto Grow). In any case, if there is a real problem then you will clearly see a pattern between the event and what is in the buffer when the event is triggered.
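Regarding the first issue, a small (hypothetical) guard you can run just before starting each trace will at least tell you when Profiler's drop-and-recreate has silently taken the trigger with it:

```sql
USE [{db_name_here}]
GO

-- OBJECT_ID returns NULL when the trigger no longer exists,
-- e.g. after Profiler dropped and recreated TraceTable
IF OBJECT_ID('dbo.TraceTableExtraInfo') IS NULL
   PRINT 'TraceTableExtraInfo is missing -- re-run the CREATE TRIGGER script before tracing.'
```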

Please note that while the example situation shown here is looking for tempdb and file growth issues, this method can be used with any Profiler event that does not return data in the TextData field. For events that do return TextData values, the WHERE condition in the UPDATE statement ensures that a TextData value populated by Profiler will not be overwritten by this trigger.
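To spot that pattern, one sketch of a follow-up query (assuming the trace table layout above; TextData is stored as NTEXT in trace tables, hence the CONVERT so it can be grouped) counts how often each captured input buffer accompanies each event:

```sql
-- Group captured input buffers by event to surface repeat offenders.
-- EventClass is the numeric trace event code (e.g. Sort Warnings, Auto Grow).
SELECT   tt.EventClass,
         CONVERT(NVARCHAR(400), tt.TextData) AS InputBuffer,
         COUNT(*)                            AS Occurrences
FROM     dbo.TraceTable tt
GROUP BY tt.EventClass,
         CONVERT(NVARCHAR(400), tt.TextData)
ORDER BY Occurrences DESC
```

A statement that shows up dozens of times next to the same event is a far better lead than any single captured buffer on its own.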
