Question on a bloated transaction log after doing index maintenance.

  • I would appreciate some input on this one.

    I have an Enterprise-class database (SQL Server 2008 R2) on which I perform index maintenance nightly, during non-peak hours, using Michelle Ufford's dbo_indexDefrag_sp script (I can provide it if you like; she notes in it that the transaction log should be monitored closely). That part works great! However, when the script completes, the transaction log has bloated to about 35 to 40GB. I back up the transaction log every 30 minutes to the SAN and have allocated about 150GB to the backup drive. This is where the issue is: I am bumping up against disk restrictions each night, and I don't want to ask my SAN administrator for more disk, as this system has been expanded so many times already.

    There is no activity on the system when I do the maintenance. So here's my question: should I add some steps to the job to switch the recovery model to simple, shrink the log file, switch back to the full recovery model, and then take a full backup? That would shrink my log and let the transaction log backups pick up from my full backup.

    My fear is that it may leave holes in my recovery process should I ever need to do a restore. I would appreciate the experts' opinions. Or what would you suggest? I appreciate all of your advice.

  • If you are going to modify the recovery model, don't go to the simple recovery model; you will break the log chain. Instead, change to the BULK_LOGGED recovery model during the index maintenance period and then back to the full recovery model when it completes. Just remember that you won't be able to do a point-in-time recovery from the t-logs taken during this time.
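    For illustration, a minimal sketch of that switch, assuming a placeholder database name (YourDB) and backup path; taking a log backup straight after switching back keeps the minimally logged window small and the log chain intact:

    -- Sketch only: YourDB and the backup path are placeholders.
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

    -- ... run the index maintenance job/script here ...

    ALTER DATABASE YourDB SET RECOVERY FULL;

    -- Back up the log immediately so the minimally logged window is
    -- bounded and point-in-time recovery resumes from here.
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log.trn';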

  • Greg.Jackson (4/12/2012)


    So here's my question. Should I add some steps to the job to alter the recovery model to simple and shrink the log file, then back to full recovery model and then do a full backup? This would shrink my log and allow the transaction log backups to pick up from my full backup.

    No, that's a very bad approach. Log growths are expensive, they slow the DB down, and the log will just have to regrow every night. You'll be slowing your maintenance down and probably causing all sorts of log fragmentation.

    A few options:

    Leave it. That would be my preference. If the log reaches 40GB during regular maintenance, then it needs to be 40GB. It won't grow another 40GB the next night; the space will be reused.

    Combine the 'leave it' option with targeting your index maintenance: rebuild just what is fragmented, not everything. This means not using maintenance plans. See SQLFool's index maintenance script or Ola Hallengren's (a sample fragmentation check follows this list).

    If 40GB really is just too big and the log reaching that size is causing problems, then consider bulk-logged recovery for the duration of the index rebuilds, if your application can accept the associated risks. Check first; don't assume.

    Finally, don't shrink the data or log files regularly. Ever.
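    As a rough sketch of the targeted approach mentioned above, this query lists candidate indexes via sys.dm_db_index_physical_stats; the 30% fragmentation and 1,000-page thresholds are common rules of thumb, not fixed values:

    -- Sketch: find indexes in the current database worth rebuilding/reorganising.
    SELECT  OBJECT_NAME(ips.object_id)          AS table_name,
            i.name                              AS index_name,
            ips.avg_fragmentation_in_percent,
            ips.page_count
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
            ON  i.object_id = ips.object_id
            AND i.index_id  = ips.index_id
    WHERE   ips.avg_fragmentation_in_percent > 30   -- illustrative threshold
            AND ips.page_count > 1000               -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;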

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I'm aware that the shrink was a bad idea. It just appears that the 40GB log file is not getting reused; it just continues to extend due to the activity on the system. If I leave it, it continues to grow larger and larger. The file system for the logs is already at 120GB, and there have been times that the log file has gotten that big.

    I'll check the script that you suggest.

  • The log will not continue to grow unless you have something else wrong.

    Please read through this: http://www.sqlservercentral.com/articles/Transaction+Log/72488/
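    As a quick diagnostic, checking log_reuse_wait_desc in sys.databases shows what is actually preventing log reuse (YourDB is a placeholder name):

    -- Sketch: anything other than NOTHING (or LOG_BACKUP while waiting for
    -- the next log backup) points at what is holding the log up.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = N'YourDB';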

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Greg.Jackson (4/12/2012)


    I would appreciate some input on this one.

    I have an Enterprise-class database (SQL Server 2008 R2) on which I perform index maintenance nightly, during non-peak hours, using Michelle Ufford's dbo_indexDefrag_sp script (I can provide it if you like; she notes in it that the transaction log should be monitored closely). That part works great! However, when the script completes, the transaction log has bloated to about 35 to 40GB. I back up the transaction log every 30 minutes to the SAN and have allocated about 150GB to the backup drive. This is where the issue is: I am bumping up against disk restrictions each night, and I don't want to ask my SAN administrator for more disk, as this system has been expanded so many times already.

    There is no activity on the system when I do the maintenance. So here's my question: should I add some steps to the job to switch the recovery model to simple, shrink the log file, switch back to the full recovery model, and then take a full backup? That would shrink my log and let the transaction log backups pick up from my full backup.

    My fear is that it may leave holes in my recovery process should I ever need to do a restore. I would appreciate the experts' opinions. Or what would you suggest? I appreciate all of your advice.

    Changing the recovery model won't necessarily help. Michelle's excellent script can either rebuild or reorg, and as we all know, a reorg is unaffected by the recovery model switch: it is always fully logged, and probably what's causing the t-log to bloat 😉

    My advice, initially, is to look at increasing the frequency of the log backups; every 30 minutes may not be enough!

    Have a separate schedule on the log backup job that runs every 10 minutes in between the current schedule's normal times. You may even need to run it every 2 or 3 minutes; it depends!

    e.g.

    The current log backup schedule runs every 30 minutes starting at 00:00, i.e. on the hour and on the half hour.

    The index maintenance job starts at 20:00.

    Set a second schedule that starts at 20:05 and runs every 10 minutes until 23:25.
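    If the log backup runs from a SQL Agent job, a second schedule along those lines could be added with something like this sketch (the job name is a placeholder):

    -- Sketch: second schedule on an existing log backup job,
    -- every 10 minutes from 20:05 to 23:25 daily.
    USE msdb;
    GO
    EXEC dbo.sp_add_jobschedule
         @job_name             = N'LogBackup - YourDB',  -- placeholder job name
         @name                 = N'Index maint window - every 10 min',
         @freq_type            = 4,       -- daily
         @freq_interval        = 1,       -- every day
         @freq_subday_type     = 4,       -- subday unit: minutes
         @freq_subday_interval = 10,      -- every 10 minutes
         @active_start_time    = 200500,  -- 20:05:00 (HHMMSS)
         @active_end_time      = 232500;  -- 23:25:00
    GO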

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Thank you, guys. I changed the frequency to every 15 minutes, and now my log_reuse_wait_desc is finally set to NOTHING. So it looks like the log will finally get reused.

    Excellent ideas!

    I also modified the log backups to run every 15 minutes between 3:00 AM and midnight, and every 3 minutes between 12:01 AM and 2:59 AM.

    Thanks for the assistance!

  • Greg.Jackson (4/12/2012)


    Thank you, guys. I changed the frequency to every 15 minutes, and now my log_reuse_wait_desc is finally set to NOTHING. So it looks like the log will finally get reused.

    Excellent ideas!

    I also modified the log backups to run every 15 minutes between 3:00 AM and midnight, and every 3 minutes between 12:01 AM and 2:59 AM.

    Thanks for the assistance!

    😉

    Remember to monitor and adjust the log sizes and backup frequency as necessary.
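    One simple way to keep an eye on it is DBCC SQLPERF(LOGSPACE), which reports the log size and percent used for every database; running it after the maintenance window shows how much of the log is actually in use:

    -- Log file size and space used, per database.
    DBCC SQLPERF(LOGSPACE);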

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Will do.

    Thanks again everyone!
