What are the minimum events and fields I need to capture to find the cause of log growth?

  • Experts,

    One of our database's log files grows to a huge size, which is unexpected.

    I would like to run a server-side trace and find what causes the log file growth.

    1. What are the minimum events and fields I need to capture?

    2. Should it be OK if I run the server-side trace for the whole night?

    Thanks in advance.

    Smith.

  • I'd start by looking in the Default Trace for the Auto Grow events for the log files and see what time(s) they are occurring. Then I'd look at Agent jobs that might be running at those times. Oftentimes index maintenance or some kind of bulk-load process is the culprit in this case. If there are no scheduled jobs that you can find running at those times, I'd suggest starting with the RPC:Completed and SQL:BatchCompleted events.
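    If you need to script that server-side trace, a minimal sketch might look like the following. The output path and max file size are assumptions you'd adjust; the event IDs (10 = RPC:Completed, 12 = SQL:BatchCompleted) and column IDs come from sys.trace_events and sys.trace_columns.

    ```sql
    -- Minimal server-side trace for RPC:Completed and SQL:BatchCompleted.
    DECLARE @TraceID int, @maxsize bigint = 512, @on bit = 1;

    -- Option 2 = file rollover when @maxsize (MB) is reached.
    -- The path below is a placeholder; SQL Server appends .trc itself.
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\Traces\LogGrowth', @maxsize, NULL;

    -- Event 10 = RPC:Completed.
    -- Columns: 1 = TextData, 13 = Duration, 14 = StartTime, 48 = RowCounts.
    EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 10, 13, @on;
    EXEC sp_trace_setevent @TraceID, 10, 14, @on;
    EXEC sp_trace_setevent @TraceID, 10, 48, @on;

    -- Event 12 = SQL:BatchCompleted, same columns.
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;
    EXEC sp_trace_setevent @TraceID, 12, 14, @on;
    EXEC sp_trace_setevent @TraceID, 12, 48, @on;

    -- 1 = start; later use 0 to stop and 2 to close/delete the trace definition.
    EXEC sp_trace_setstatus @TraceID, 1;
    ```

    Running it overnight is fine as long as the drive holding the rollover files has room.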

    Here's a query I use to monitor autogrow events:

    SELECT
        LEFT(TE.name, CHARINDEX(' ', TE.name) - 1) AS type,
        I.DatabaseID,
        I.DatabaseName,
        I.FileName,
        I.StartTime,
        I.EndTime,
        CONVERT(DECIMAL(18, 2), I.Duration / 1000.00) AS duration_ms,
        CONVERT(DECIMAL(18, 2), I.Duration / 1000.00 / 1000.00) AS seconds,
        I.IntegerData * 8 AS growthKB
    FROM sys.traces T
        CROSS APPLY sys.fn_trace_gettable(
            CASE WHEN PATINDEX('%Log[_]%', T.[path]) <> 0
                 THEN LEFT(T.[path], PATINDEX('%Log[_]%', T.[path]) + 2) + '.trc'
                 ELSE T.[path]
            END, T.max_files) I
        JOIN sys.trace_events AS TE
            ON I.EventClass = TE.trace_event_id
    WHERE T.is_default = 1
        AND TE.name IN ('Log File Auto Grow', 'Data File Auto Grow');

    Once you know what it is, I'd suggest a couple of things:

    1. Determine the max size you'll need for your log files and size them appropriately.

    2. Increase the frequency of log backups during that time window.

    3. If a data load or index maintenance is causing the growth, consider switching from FULL to BULK_LOGGED recovery for the duration of that process. Remember that you lose point-in-time recovery for any log backup that contains minimally logged operations.
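    As a sketch of points 1 and 3, something like this; the database name, file name, sizes, and backup path are all placeholders:

    ```sql
    -- 1. Pre-size the log so autogrow rarely fires (name and size are placeholders).
    ALTER DATABASE YourDB MODIFY FILE (NAME = YourDB_log, SIZE = 8192MB);

    -- 3. Wrap the bulk process in BULK_LOGGED, then return to FULL
    --    and take a log backup to restart the point-in-time chain.
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;
    -- ... run the data load / index maintenance here ...
    ALTER DATABASE YourDB SET RECOVERY FULL;
    BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log.trn';  -- placeholder path
    ```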

  • Thanks a lot.

    Oftentimes index maintenance or some kind of bulk-load process is the culprit in this case. If there are no scheduled jobs that you can find running at those times, I'd suggest starting with the RPC:Completed and SQL:BatchCompleted events.

    There are no scheduled jobs. I will start with the RPC:Completed and SQL:BatchCompleted events.

    3. If a data load or index maintenance is causing the growth, consider switching from FULL to BULK_LOGGED recovery for the duration of that process.

    Hmm, that may not be possible. Mirroring is configured on that database, so it must always stay in FULL recovery.

    I can execute the script you have given above and compare the results once I gather the trace data, right?

    Thanks a lot once again. It's helpful. I hope I will be able to narrow down the issue and resolve it.

    Smith.

  • Okay, so there are no scheduled jobs on that server. Is there an SSIS package or something else on another server that is doing a data load?

    The script I provided specifically queries the default trace. If you want to query another server-side trace, you'd want to eliminate the reference to sys.traces, because once the trace is stopped and closed there will not be a row in sys.traces for it. Then you need to provide the path to the trace file(s) to fn_trace_gettable(). So your query would be something like this:

    SELECT
        LEFT(TE.name, CHARINDEX(' ', TE.name) - 1) AS type,
        I.DatabaseID,
        I.DatabaseName,
        I.FileName,
        I.StartTime,
        I.EndTime,
        CONVERT(DECIMAL(18, 2), I.Duration / 1000.00) AS duration_ms,
        CONVERT(DECIMAL(18, 2), I.Duration / 1000.00 / 1000.00) AS seconds,
        I.IntegerData * 8 AS growthKB
    FROM sys.fn_trace_gettable({path to trace file(s)}, {number of files}) I
        JOIN sys.trace_events AS TE
            ON I.EventClass = TE.trace_event_id;

  • Thanks.

    I was planning to import the trace data into a table and analyse it.

    It will be the same as the script you have given, right?

    Thanks again.

    Smith.

  • Yes, you can use the script I just provided to load the data into a table, but you can do all the analysis you need without a table by just adapting the query I provided. BTW, there are a lot more columns available in the trace than what I have listed in that query; I only included those because that's what I needed when logging autogrowth events.
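    If you do want a table, a sketch like this would work; the table name and trace path are placeholders, and passing DEFAULT as the second argument makes fn_trace_gettable read all rollover files:

    ```sql
    -- Load the whole trace into a permanent table for later analysis.
    SELECT I.*
    INTO dbo.LogGrowthTrace   -- placeholder table name
    FROM sys.fn_trace_gettable(N'D:\Traces\LogGrowth.trc', DEFAULT) AS I;

    -- Then query it however you like, e.g. by event and time:
    SELECT EventClass, StartTime, TextData, RowCounts
    FROM dbo.LogGrowthTrace
    ORDER BY StartTime;
    ```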

  • Yes, I got it. Thanks a lot.

    I will try everything and keep you updated.

  • Joy Smith San (2/10/2012)



    I know it sounds stupid-simple, but there is a column you can select in Profiler called "RowCounts". You'd be amazed at how quickly it will help you find this particular problem, provided that the query actually completes.

    --Jeff Moden



  • Jack Corbett (2/10/2012)



    Hi,

    I'm back on this thread again.

    I ran Profiler and captured all the necessary fields, but missed "FileName".

    I should be able to find what causes the log growth without "FileName", right?

    I queried for the rows where growthKB is greater than 0, and they all show up as "RPC:Completed" with TextData of "exec sp_reset_connection". What does that mean?

    I checked the RowCounts value and it seems OK; the max is below 3,000.

    Thanks,

    Smith.

  • Analysed, and the issue is resolved by scheduling log backups at a proper interval.

    The client applications really are doing that many huge transactions.

    So frequent log backups are essential.

    Thanks for all the valuable inputs.
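    For reference, the fix amounts to a frequently scheduled (e.g. via an Agent job) log backup plus a check of log usage; the database name and backup path below are placeholders:

    ```sql
    -- Run on a schedule, e.g. every 15 minutes; use unique file names per run
    -- so the restore chain is preserved (this path is a placeholder).
    BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log.trn';

    -- Verify log space usage across all databases.
    DBCC SQLPERF(LOGSPACE);
    ```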
