Transaction log is full, help...

  • Hi all,

    I run the Rebuild/Reorganize indexes process in production every night. This morning, when I checked the maintenance plan history for one database, let's say DB1, it came up with an error:

    Executing the query "/*************************************************..." failed with the following error: "The transaction log for database 'DB1' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases."

    I ran this query:

    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'DB1'

    The result is:

    name | log_reuse_wait_desc
    DB1  | NOTHING

    I back up the transaction log every 2 hours.

    Any idea how to solve this issue?

    Many thanks!

    Cheers

  • Unless you checked the log_reuse_wait_desc at the time of the error, it likely cleared up by the time you got into the office to check it. It was likely just going to show "LOG BACKUP" anyway.

    Rebuilding and reorganizing indexes can generate a high amount of log activity, depending on the size of the tables within your database. If you have your log file set to a specific size limit, you should look at adjusting that size to handle the workload of your maintenance task. You do have the option of offloading some of that sort activity to tempdb, but obviously this means tempdb could grow as well.
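    Offloading the sort to tempdb is done per index with the SORT_IN_TEMPDB option. A minimal sketch, assuming a hypothetical index IX_Orders_Date on dbo.Orders:

    ```sql
    -- Rebuild one index, performing the intermediate sort in tempdb
    -- (the index and table names here are hypothetical placeholders)
    ALTER INDEX IX_Orders_Date ON dbo.Orders
    REBUILD WITH (SORT_IN_TEMPDB = ON);
    ```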

    Another option would be to increase the frequency of your log backups during this maintenance window, but I would expect growing the log file may still be required.
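    Pre-growing the log ahead of the maintenance window can be sketched like this, assuming the logical log file name is DB1_log and that 4096 MB covers the workload (check sys.database_files for the real name, and size to your own needs):

    ```sql
    -- Find the logical name and current size of the log file first
    SELECT name, size * 8 / 1024 AS size_mb
    FROM DB1.sys.database_files
    WHERE type_desc = 'LOG';

    -- Then grow it to a size that covers the maintenance workload
    -- (logical name and 4096 MB are example values, not recommendations)
    ALTER DATABASE DB1
    MODIFY FILE (NAME = DB1_log, SIZE = 4096MB);
    ```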

    If you are open to custom scripts that only rebuild or reorganize the indexes that actually need it, you might try Ola Hallengren's index maintenance scripts. They offer a bit more control over index maintenance.
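    For reference, a typical call to Ola's IndexOptimize procedure looks roughly like this; the threshold values are illustrative, so check his documentation for the current parameter set:

    ```sql
    -- Reorganize between 5% and 30% fragmentation, rebuild above 30%
    -- (the 5/30 thresholds here are example values)
    EXECUTE dbo.IndexOptimize
        @Databases = 'DB1',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30;
    ```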

    Shawn Melton
    Twitter: @wsmelton
    Blog: wsmelton.github.com
    Github: wsmelton

  • Shawn Melton (8/24/2014)


    Unless you checked the log_reuse_wait_desc at the time of the error, it likely cleared up by the time you got into the office to check it. It was likely just going to show "LOG BACKUP" anyway.

    Rebuilding and reorganizing indexes can generate a high amount of log activity, depending on the size of the tables within your database. If you have your log file set to a specific size limit, you should look at adjusting that size to handle the workload of your maintenance task. You do have the option of offloading some of that sort activity to tempdb, but obviously this means tempdb could grow as well.

    Another option would be to increase the frequency of your log backups during this maintenance window, but I would expect growing the log file may still be required.

    If you are open to custom scripts that only rebuild or reorganize the indexes that actually need it, you might try Ola Hallengren's index maintenance scripts. They offer a bit more control over index maintenance.

    Thank you. That's a very good response from you!

    Hmm, I would like to know what size I should set the transaction log to. The database and tempdb are on the same drive, so I don't think setting SORT_IN_TEMPDB = ON will help....

    What do you think about increasing the backup frequency from every 2 hours to every 1 hour?

    Thanks

  • WhiteLotus (8/24/2014)


    What do you think about increasing the backup frequency from every 2 hours to every 1 hour?

    I have no clue, that is something you would have to test.

    Shawn Melton
    Twitter: @wsmelton
    Blog: wsmelton.github.com
    Github: wsmelton

  • How long does it take to rebuild your indexes? You would at least want to get a log backup at the beginning and end of that process (no, I'm not suggesting making that a part of the process, just that the frequency of your backups should reflect that). Once every two hours is pretty light for most OLTP systems.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Do you check the index fragmentation before deciding on a rebuild (http://msdn.microsoft.com/en-us/library/ms189858.aspx)? This way you will not rebuild all the indexes. Your log might be growing large if you have very large tables or are doing ONLINE index rebuilds. It's also quite possible that your estimated log size is simply too small.
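    The fragmentation check from that link can be sketched as follows; it lists only the indexes over a chosen threshold instead of touching everything (the 10% cutoff and 100-page floor are example values):

    ```sql
    -- Run while connected to the target database (e.g. DB1):
    -- average fragmentation per index, worst first
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
      AND ips.page_count > 100   -- tiny indexes are not worth maintaining
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```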

  • Grant Fritchey (8/25/2014)


    How long does it take to rebuild your indexes? You would at least want to get a log backup at the beginning and end of that process (no, I'm not suggesting making that a part of the process, just that the frequency of your backups should reflect that). Once every two hours is pretty light for most OLTP systems.

    Thank you for your response! Appreciate it!

    The duration is around 1 minute for the rebuild/reorganize.

    I did a transaction log backup at 2 am, the rebuild process at 2:30 am, and a log backup again at 4 am....

  • TheAccidentalDBA (8/25/2014)


    Do you check the index fragmentation before deciding on a rebuild (http://msdn.microsoft.com/en-us/library/ms189858.aspx)? This way you will not rebuild all the indexes. Your log might be growing large if you have very large tables or are doing ONLINE index rebuilds. It's also quite possible that your estimated log size is simply too small.

    Thanks for your response. Appreciate it !

    I don't rebuild all indexes, for sure.

    I have a custom script that limits the number of rebuilds:

    above 50% fragmentation I do a Rebuild, otherwise a Reorganize....
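    That threshold rule can be sketched as a query that generates the appropriate statement per index (the 50%/5% cutoffs match the description above; running it in the target database is assumed):

    ```sql
    -- Generate REBUILD above 50% fragmentation, REORGANIZE between 5% and 50%
    SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
           + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
           + '.' + QUOTENAME(OBJECT_NAME(ips.object_id))
           + CASE WHEN ips.avg_fragmentation_in_percent > 50
                  THEN ' REBUILD;'
                  ELSE ' REORGANIZE;'
             END AS maintenance_command
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5
      AND i.name IS NOT NULL;  -- skip heaps
    ```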

  • If it only takes one minute to rebuild your indexes, we're talking very small data sets. It sounds like you have a growth limit placed on your log file or it's on a very tiny disk. Either way, a more frequent log backup should help, but you might want to consider giving yourself a little head room with a larger file or a larger disk.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • As Grant mentioned, you can increase the log backup frequency. Can you do one at 3 AM? Do you have any batch jobs running? If so, it's quite possible you have a long-running transaction as well. You can do any of the following:

    1. More frequent log backups

    2. Change the index rebuild job time if there are other batch jobs

    3. Split the index job and run it at different times

    4. If nothing works, request an increase in space for the log files.
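    An extra log backup in the middle of the maintenance window can be sketched like this (the backup path is a placeholder; adjust it to your backup location):

    ```sql
    -- One-off log backup between the 2 am and 4 am scheduled backups
    BACKUP LOG DB1
    TO DISK = N'X:\Backups\DB1_log_0300.trn'
    WITH COMPRESSION, CHECKSUM;
    ```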

  • TheAccidentalDBA (8/26/2014)


    As Grant mentioned, you can increase the log backup frequency. Can you do one at 3 AM? Do you have any batch jobs running? If so, it's quite possible you have a long-running transaction as well. You can do any of the following:

    1. More frequent log backups

    2. Change the index rebuild job time if there are other batch jobs

    3. Split the index job and run it at different times

    4. If nothing works, request an increase in space for the log files.

    Thanks!! I will try.

  • As Grant suggested, did you check for any growth restrictions on the .ldf file?

    Next time, if the same thing happens and you know which database's log is getting full, try adding a second log file on another drive if you have free space; that way the job won't fail.

    Let me know your thoughts.
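    Adding a temporary second log file can be sketched as follows (the logical name, path, and size are all placeholders):

    ```sql
    -- Emergency second log file on a drive with free space
    ALTER DATABASE DB1
    ADD LOG FILE (
        NAME = DB1_log2,
        FILENAME = N'E:\SQLLogs\DB1_log2.ldf',
        SIZE = 1024MB
    );
    ```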

    Regards,
    SQLisAwe5oMe.

  • You can check log growth by running the query below. This T-SQL is part of a script that Patrick Akhamie wrote. I don't have his URL, but I modified it a little for our environment. I don't want to leave him out and not give credit for it. But it works great for me.

    This will give you an idea of how often the log file grows and to what size.

    USE [msdb]
    GO

    -- Email the auto-growth events that have occurred recently to the DBA
    -- Written by: Patrick Akhamie
    -- Date: 10/06/2011
    DECLARE @filename NVARCHAR(1000);
    DECLARE @bc INT;
    DECLARE @ec INT;
    DECLARE @bfn VARCHAR(1000);
    DECLARE @efn VARCHAR(10);

    -- Get the name of the current default trace,
    -- which captures key information including auditing events, database events,
    -- error events, full text events, object creation, object deletion
    -- and object alteration.
    SELECT @filename = CAST(value AS NVARCHAR(1000))
    FROM ::fn_trace_getinfo(DEFAULT)
    WHERE traceid = 1 AND property = 2;

    SELECT @filename;

    -- Rip apart the file name into pieces
    SET @filename = REVERSE(@filename);
    SET @bc = CHARINDEX('.', @filename);
    SET @ec = CHARINDEX('_', @filename) + 1;
    SET @efn = REVERSE(SUBSTRING(@filename, 1, @bc));
    SET @bfn = REVERSE(SUBSTRING(@filename, @ec, LEN(@filename)));

    -- Set the filename without the rollover number
    SET @filename = @bfn + @efn;

    SELECT
        td = ftg.StartTime, '',
        td = te.name, '',
        td = DB_NAME(ftg.databaseid), '',
        td = Filename, '',
        td = (ftg.IntegerData * 8) / 1024.0, '',  -- growth in MB
        td = (ftg.duration / 1000)                -- duration in ms
    FROM ::fn_trace_gettable(@filename, DEFAULT) AS ftg
    INNER JOIN sys.trace_events AS te ON ftg.EventClass = te.trace_event_id
    WHERE (EventClass = 92    -- Data File Auto-grow
        OR EventClass = 93)   -- Log File Auto-grow
    AND StartTime > DATEADD(dy, -10, GETDATE())  -- Less than 10 days ago
    ORDER BY StartTime;
