Transaction log not shrinking after Maintenance Task

  • Hello there, I have SQL Server 2008 (64-bit) running on Windows Server 2008.

    I have several databases in compatibility mode (one at level 80 and one at level 90). I also have a maintenance task that runs every 30 minutes and backs up the transaction logs for all databases on the server (all in the FULL recovery model, obviously). The transaction log backup files are being written to disk, so I assumed everything was OK.

    Two weeks later, our biggest database (200GB) went offline reporting that the transaction log was full? Yet transaction log backups are clearly being created? It's almost as if the transaction log is not being truncated after it has been backed up to disk by the maintenance plan?

    Any ideas on why this would be happening?

    Do I need to add a 'Shrink Database Task' to my 30-minute transaction log backup job?

    As a short-term measure we set the database to SIMPLE, shrunk the log, then switched back to FULL (we needed the database back online ASAP), so it's not mega urgent, but any ideas would be very helpful 😀

    Cheers

    J.

    Knock Knock, Who's There?, sp underscore, sp underscore who?
    spid1, spid2, spid3....

  • Don't shrink the database. See - http://sqlinthewild.co.za/index.php/2007/09/08/shrinking-databases/

    If the log space isn't being reused, there's something still active in the older log records. Take a look at sys.databases; there's a column log_reuse_wait_desc. It'll tell you what's preventing the log space from being reused.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • On running:

    SELECT NAME, compatibility_level, log_reuse_wait, log_reuse_wait_desc FROM sys.databases

    I get:

    NAME              compatibility_level  log_reuse_wait  log_reuse_wait_desc
    synergy           80                   2               LOG_BACKUP
    telesales_remote  80                   2               LOG_BACKUP

    I thought LOG_BACKUP didn't prevent truncation of the log file? Very confused :w00t:

    Knock Knock, Who's There?, sp underscore, sp underscore who?
    spid1, spid2, spid3....

  • Are you sure the log backups are running? Check that someone hasn't added WITH NO_TRUNCATE to the BACKUP LOG statement.
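
    For reference, a rough sketch of the kind of statement to look for in the job step (the database name and backup path here are only placeholders):

    -- Normal scheduled log backup: backs up the log and lets the inactive portion be reused
    BACKUP LOG [synergy] TO DISK = N'D:\LogBackups\synergy_log.trn';

    -- If someone has added NO_TRUNCATE, the backup still runs but the log is never truncated
    -- BACKUP LOG [synergy] TO DISK = N'D:\LogBackups\synergy_log.trn' WITH NO_TRUNCATE;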

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • The transaction logs are being backed up. See the screenshot below; it shows several things:

    Transaction logs are being dumped every 30 minutes (plus one ad-hoc one I ran for the screenshot)

    The log file remains at nearly 1GB in size...

    The log_reuse_wait_desc is 'NOTHING' (ran just after the logs were dumped)

    If it's NOTHING, why is the log file not shrinking down? Not the end of the world (I'll just keep a closer eye on it), just curious now, and a bit scared the log file will become 'full' again 🙂

    Knock Knock, Who's There?, sp underscore, sp underscore who?
    spid1, spid2, spid3....

  • The log files won't shrink. BACKUP LOG just makes the space inside reusable; it doesn't reduce the size of the file.

    Personally, I would leave the file that big. If it's reaching that size through normal activity, then it needs to be that size. Whatever caused it to grow is long gone; it may have been a long-running transaction. Index rebuilds are common culprits.

    If you're totally sure that the log shouldn't be that size and will never reach that size again, and it was a once-off strange operation that caused it to grow, then run a once-off shrinkfile on the log and shrink it to the size that it usually needs to be.
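
    As a rough sketch of that once-off shrink (the logical file name and the 10GB target here are assumptions; check the real name with sp_helpfile first):

    -- Find the logical name of the log file for the current database
    EXEC sp_helpfile;

    -- Once-off shrink of the log file to an explicit target size, in MB (here 10240 MB = 10GB)
    DBCC SHRINKFILE (N'synergy_Log', 10240);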

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • The original problem was a 150GB log file on a different database, which I shrunk to get it working this morning (desperate measure, 500 staff screaming at me etc., you get the drill :))

    Now, I have a weekly index rebuild task which may be bloating the log file considerably, as you say. What I will do then is alter my weekly index rebuild job to rebuild, then shrink, then reorganise? Does that sound sensible?

    Knock Knock, Who's There?, sp underscore, sp underscore who?
    spid1, spid2, spid3....

  • jordonpilling (6/18/2009)


    Now, I have a weekly index rebuild task which may be bloating the log file considerably, as you say. What I will do then is alter my weekly index rebuild job to rebuild, then shrink, then reorganise? Does that sound sensible?

    No.

    Two options that immediately come to mind:

    1) Switch the DB to bulk-logged before the index rebuild to prevent massive log growth. The log backups will still be big, but the log itself won't be (there's a rough sketch of this below).

    2) If you can't risk bulk-logged, do a shrink of just the transaction log after rebuilding the indexes.

    Rebuilding indexes and then shrinking the entire database is a waste of time. When the data file is shrunk, all the nicely rebuilt indexes will be badly fragmented.
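
    A minimal sketch of option 1, assuming the rebuilds run in the same job step and the database is called synergy (object names and paths are placeholders):

    -- Minimise logging of the index rebuilds by switching to BULK_LOGGED for the duration
    ALTER DATABASE [synergy] SET RECOVERY BULK_LOGGED;

    -- The index rebuild(s) go here, e.g.
    ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;

    -- Switch straight back to FULL once the rebuilds are done
    ALTER DATABASE [synergy] SET RECOVERY FULL;

    -- Take a log backup afterwards; note that a log backup containing minimally logged
    -- operations cannot be restored to a point in time within that backup
    BACKUP LOG [synergy] TO DISK = N'D:\LogBackups\synergy_log.trn';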

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (6/18/2009)

    When the data file is shrunk, all the nicely rebuilt indexes will be badly fragmented.

    Aye, I realise that the shrink fragments the new index, that's why I proposed a reorganise afterwards.

    On further thought, I might abandon the rebuild completely and just keep a reorganise going (once a day), as the indexes should never go over 30% fragmentation within 24 hours anyway, and just update statistics as necessary, I think (there are several extremely high-traffic tables, with around 500,000 rows a day being inserted and deleted).

    Thanks for the tips, mate, I'll have a good think on this one.

    Knock Knock, Who's There?, sp underscore, sp underscore who?
    spid1, spid2, spid3....

  • I did an experiment once on a modest-sized database (trans log about 500MB). I used the nightly maintenance plans to back up the database, rebuild all indexes, etc. The next morning I found that the transaction log had grown to about 3GB. (Our shop is closed after 7pm, so this growth was not due to user activity.) We also back up our transaction logs hourly, and I found that each dump was about 2GB. 2GB times a 12-hour day = 24GB of trans log dumps for one database in one day. Not too swift.

    As noted, while the maintenance plans will not shrink the log file, they certainly help it grow. In summary, much of the trans log growth was not "natural sizing" but a result of the maintenance plans. I'm still fighting with the issue today.

    Barkingdog

  • On further investigation into this, it's definitely my weekly maintenance task that is causing the log file to explode in size (probably due to the rebuilding of indexes).

    Now, the database in question is 120GB. After running the weekly maintenance task, the log file shoots up to 130GB+ (it never really goes above 10GB during the week), and then when the (every 30 minutes) transaction log backup job fires, there is not enough disk space to dump the transaction log.

    Sooooo, I was thinking, do I:

    1) Put the database into SIMPLE before the index rebuild task, then back to FULL afterwards?

    2) Put the database into BULK_LOGGED before the index rebuild task, then back to FULL afterwards?

    3) Leave it in FULL, but shrink the log file (not the database, e.g. DBCC SHRINKFILE (N'synergy_Log', 0, TRUNCATEONLY)) after the index rebuild?

    Important to note that when this job runs, the database is effectively idle (weekend, closed for business), so during the rebuild there is not really much else going on / to be lost if I truncate the log.

    Any of these would save me from the transaction log filling up and the lack of disk space; I was just after a few opinions as to what's best practice for this scenario?

    Cheers.

    J.

    Knock Knock, Who's There?, sp underscore, sp underscore who?
    spid1, spid2, spid3....

  • jordonpilling (6/21/2009)


    1) Put the database into SIMPLE before the index rebuild task, then back to FULL afterwards?

    If you do that, you'll have to take a full backup after switching back to full recovery; otherwise tran log backups will fail.

    2) Put the database into BULK_LOGGED before the index rebuild task, then back to FULL afterwards?

    That would be my preferred solution

    3) Leave it in FULL, but shrink the log file (not the database, e.g. DBCC SHRINKFILE (N'synergy_Log', 0, TRUNCATEONLY)) after the index rebuild?

    Repeated shrink/grow is going to leave you with file-fragmentation and probably hundreds of VLFs, depending on your auto-grow settings. Neither's good. Also please note that TruncateOnly is not a valid option when shrinking transaction log files.

    If you do decide to shrink, don't shrink to 0. Find the size that the log needs to be for the rest of the time, add a bit for safety, and shrink explicitly to that value.
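
    To illustrate the caveat on option 1 (names and paths are placeholders): after switching from SIMPLE back to FULL, the log chain has to be restarted with a full (or differential) backup before log backups will succeed again.

    ALTER DATABASE [synergy] SET RECOVERY FULL;

    -- Restart the log chain before the 30-minute log backup job next fires
    BACKUP DATABASE [synergy] TO DISK = N'D:\Backups\synergy_full.bak';

    -- Subsequent log backups will now work
    BACKUP LOG [synergy] TO DISK = N'D:\LogBackups\synergy_log.trn';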

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (6/18/2009)


    jordonpilling (6/18/2009)


    Now, I have a weekly index rebuild task which may be bloating the log file considerably, as you say. What I will do then is alter my weekly index rebuild job to rebuild, then shrink, then reorganise? Does that sound sensible?

    No.

    Two options that immediately come to mind:

    1) Switch the DB to bulk-logged before the index rebuild to prevent massive log growth. The log backups will still be big, but the log itself won't be.

    2) If you can't risk bulk-logged, do a shrink of just the transaction log after rebuilding the indexes.

    Rebuilding indexes and then shrinking the entire database is a waste of time. When the data file is shrunk, all the nicely rebuilt indexes will be badly fragmented.

    Gail, I'm curious: if you have a series of ALTER INDEX ... REBUILD or REORGANIZE statements run as a single batch, would that cause the log to grow more than running each one dynamically via exec(@alter_index_query) or statically with a GO between each ALTER INDEX line?

    Gaby
    "In theory, theory and practice are the same. In practice, they are not." - Albert Einstein

  • Should be about the same either way, provided none of them has an explicit transaction wrapped around the whole lot.
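
    As a quick illustration (table names are made up): each ALTER INDEX runs in its own transaction, so batching them makes little difference; the case to avoid is one explicit transaction wrapped around the whole lot, which keeps all of that log space active until the final commit.

    -- These log roughly the same whether run as one batch, via exec(), or with GO between them
    ALTER INDEX ALL ON dbo.Orders REBUILD;
    ALTER INDEX ALL ON dbo.OrderLines REBUILD;

    -- The problem case: an explicit transaction around everything
    -- BEGIN TRANSACTION;
    --     ALTER INDEX ALL ON dbo.Orders REBUILD;
    --     ALTER INDEX ALL ON dbo.OrderLines REBUILD;
    -- COMMIT;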

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
