log file growth

  • I need a report to find the log file growth in GB on a weekly basis for the last month. Can anyone help with this?

  • ramyours2003 (12/2/2016)


    I need a report to find the log file growth in GB on a weekly basis for the last month. Can anyone help with this?

    There are a few factors that need to be thought about here:

    1. Which recovery model is in use?

    2. Are you actually experiencing growth, or just changes in the amount of white space inside the transaction log itself?

    3. Are you shrinking the file regularly and experiencing rapid growth soon afterwards?

    You need to remember that although the log file might be 5GB in size (for example), it may be only 1% filled with data; the file is kept large so that further operations don't have to resize it.

    Log file "growth" in this sense is a misleading measure. You need to take a baseline size of the log, note how often it changes, and identify what is happening inside the database to necessitate that growth.
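
    As a quick first check, you can look at each database's recovery model and how much of the log is actually used. A minimal sketch; sys.dm_db_log_space_usage requires SQL Server 2012 or later:

    -- recovery model per database
    SELECT name, recovery_model_desc
    FROM sys.databases;

    -- total log size vs. space actually used (run inside the database in question)
    SELECT total_log_size_in_bytes / 1048576.0 AS LogSizeMB,
           used_log_space_in_bytes / 1048576.0 AS UsedMB,
           used_log_space_in_percent AS UsedPct
    FROM sys.dm_db_log_space_usage;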

  • ramyours2003 (12/2/2016)


    I need a report to find the log file growth in GB on a weekly basis for the last month. Can anyone help with this?

    A weekly basis may not be detailed enough to really understand what's going on in your transaction log. For example, I know the thing most likely to increase the size of my transaction logs in my production environment is the SQL Agent job I have that does index maintenance, which I only run once a week.

    If your database is in the full recovery model, then you may be able to analyze the size of your transaction log backups to get a better picture of when the growth is happening, then work from there.

    -- backup history per database
    SELECT bs.backup_set_id, bs.database_name,
           CASE bs.type
               WHEN 'D' THEN 'Database'
               WHEN 'I' THEN 'Differential'
               WHEN 'L' THEN 'Log'
               WHEN 'F' THEN 'Filegroup'
               WHEN 'G' THEN 'Diff Filegroup'
               ELSE bs.type
           END AS backup_type,
           bs.is_copy_only AS is_copy, bs.backup_start_date, bs.backup_finish_date,
           DATEDIFF(minute, bs.backup_start_date, bs.backup_finish_date) AS backup_min,
           bs.name, bs.description, mf.physical_device_name, bs.user_name,
           bs.backup_size, bs.compressed_backup_size,
           bs.first_lsn, bs.last_lsn, bs.checkpoint_lsn, bs.database_backup_lsn
    FROM msdb.dbo.backupset bs
    INNER JOIN msdb.dbo.backupmediafamily mf ON bs.media_set_id = mf.media_set_id
    WHERE bs.database_name = 'your_db_name'
      AND bs.backup_start_date >= '2016-01-01'
      AND bs.type = 'L'
    ORDER BY bs.backup_set_id DESC;
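
    To get the weekly GB figure the original post asks for, the same history can be rolled up by week. A sketch along the same lines (backup_size is in bytes, so divide by 1024^3 for GB; the aliases are just illustrative). Bear in mind this measures log volume generated, not the physical size of the log file:

    -- weekly total of log backup volume in GB over the last month (sketch)
    SELECT DATEADD(week, DATEDIFF(week, 0, bs.backup_start_date), 0) AS week_start,
           SUM(bs.backup_size) / 1073741824.0 AS log_backup_gb
    FROM msdb.dbo.backupset bs
    WHERE bs.database_name = 'your_db_name'
      AND bs.type = 'L'
      AND bs.backup_start_date >= DATEADD(month, -1, GETDATE())
    GROUP BY DATEADD(week, DATEDIFF(week, 0, bs.backup_start_date), 0)
    ORDER BY week_start;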

  • The default trace also captures file auto-growth events (event class 93 is Log File Auto Grow), so you can read recent log growth straight out of it:

    DECLARE @filename NVARCHAR(1000);
    DECLARE @bc INT;
    DECLARE @ec INT;
    DECLARE @bfn VARCHAR(1000);
    DECLARE @efn VARCHAR(10);

    -- Get the name of the current default trace
    SELECT @filename = CAST(value AS NVARCHAR(1000))
    FROM ::fn_trace_getinfo(DEFAULT)
    WHERE traceid = 1 AND property = 2;

    -- rip apart the file name into pieces
    SET @filename = REVERSE(@filename);
    SET @bc = CHARINDEX('.', @filename);
    SET @ec = CHARINDEX('_', @filename) + 1;
    SET @efn = REVERSE(SUBSTRING(@filename, 1, @bc));
    SET @bfn = REVERSE(SUBSTRING(@filename, @ec, LEN(@filename)));

    -- set the file name without the rollover number
    SET @filename = @bfn + @efn;

    -- process all trace files
    SELECT ftg.StartTime
         , te.name AS EventName
         , DB_NAME(ftg.DatabaseID) AS DatabaseName
         , ftg.FileName
         , (ftg.IntegerData * 8) / 1024.0 AS GrowthMB
         , (ftg.Duration / 1000) AS DurMS
    FROM ::fn_trace_gettable(@filename, DEFAULT) AS ftg
    INNER JOIN sys.trace_events AS te ON ftg.EventClass = te.trace_event_id
    WHERE ftg.EventClass = 93   -- Log File Auto Grow
      AND ftg.StartTime > DATEADD(month, -1, GETDATE())
    ORDER BY ftg.StartTime;
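
    Grouped by week, the same trace output answers the original question directly. A sketch that assumes it runs in the same batch, so @filename is still in scope; note the default trace only keeps a handful of 20MB rollover files, so a full month of history is not guaranteed:

    -- weekly sum of log auto-growth in GB over the last month (sketch)
    SELECT DATEADD(week, DATEDIFF(week, 0, ftg.StartTime), 0) AS WeekStart
         , DB_NAME(ftg.DatabaseID) AS DatabaseName
         , SUM(ftg.IntegerData * 8) / 1048576.0 AS GrowthGB
    FROM ::fn_trace_gettable(@filename, DEFAULT) AS ftg
    WHERE ftg.EventClass = 93   -- Log File Auto Grow
      AND ftg.StartTime > DATEADD(month, -1, GETDATE())
    GROUP BY DATEADD(week, DATEDIFF(week, 0, ftg.StartTime), 0), DB_NAME(ftg.DatabaseID)
    ORDER BY WeekStart, DatabaseName;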

  • To make things easier, you could have a script that runs on a schedule and records the data in a permanent table. I use the following:

    --HOLDING TABLE FOR DBCC RESULTS
    create table tbl_logspace (
        RecordDate datetime default getdate()
        , DatabaseName varchar(100)
        , LogSizeMB real
        , LogSpaceUsedPct real
        , Status int
    );

    --POPULATE THE HOLDING TABLE WITH CURRENT DATA
    insert tbl_logspace (
        DatabaseName
        , LogSizeMB
        , LogSpaceUsedPct
        , Status)
    exec('dbcc sqlperf(logspace)');

    --VIEW THE CONTENTS
    select * from tbl_logspace;

    This makes it easier to retrieve the data without having to write complex scripts, and you can view the information based on any time range you define.

    Put the INSERT into an Agent Job, schedule it and off you go.....
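
    Once the table has built up some history, the weekly report the original post asks for is a simple aggregate. A sketch against the table above (LogSizeMB / 1024 gives GB; the aliases are just illustrative):

    -- weekly maximum log size in GB per database, last month (sketch)
    select DatabaseName
         , dateadd(week, datediff(week, 0, RecordDate), 0) as WeekStart
         , max(LogSizeMB) / 1024.0 as MaxLogSizeGB
    from tbl_logspace
    where RecordDate >= dateadd(month, -1, getdate())
    group by DatabaseName, dateadd(week, datediff(week, 0, RecordDate), 0)
    order by DatabaseName, WeekStart;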

    Regards,

    Kev
