Huge log backups after Ola's defrag and maint script runs

  • I have a SQL 2008 R2 instance that's hosting several OLTP databases for LOB applications.

    Every night Ola Hallengren's index defrag script runs, followed by a full nightly backup (using Ola's backup scripts). Log backups are taken every hour from 3 am (after the nightly backup concludes) until 11 pm (right before the nightly index defrag script runs).

    Most backup log files are just a few MB, or even less than a meg. However, the 3 am log backup file often runs between 7 GB and 11 GB because of the index defrag process.

    My question is whether it would be a good idea to toggle the database recovery model to simple then back to full immediately before the nightly backup runs. The idea would be to get one last good log backup beforehand, then intentionally break the log chain right before the full backup runs since I would never need to restore prior to the latest full backup.
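    In rough T-SQL terms (the database name and backup path below are just placeholders), the idea would be something like:

      -- Runs after the index defrag finishes, right before the nightly full backup
      ALTER DATABASE [MyAppDB] SET RECOVERY SIMPLE;   -- intentionally breaks the log chain
      ALTER DATABASE [MyAppDB] SET RECOVERY FULL;     -- back to full recovery
      BACKUP DATABASE [MyAppDB] TO DISK = N'X:\Backup\MyAppDB_FULL.bak';  -- new base for the log chain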

    What are your thoughts on this idea? Or, should I just live with the large log backup files?

    Thanks in advance for your advice,

    Andre Ranieri

  • If this is an issue for you, you could try bulk-logged recovery first. But remember that this makes the backup process more complicated, and what happens if, for example, the script that changes the database recovery model fails?
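    For example, something like this around the index maintenance step (the database name and backup path are just placeholders):

      -- Placeholder names; switch before the index maintenance, switch back afterwards
      ALTER DATABASE [MyAppDB] SET RECOVERY BULK_LOGGED;

      -- ... run the nightly index defrag job here ...

      ALTER DATABASE [MyAppDB] SET RECOVERY FULL;
      BACKUP LOG [MyAppDB] TO DISK = N'X:\Backup\MyAppDB_log.trn';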

  • Fully expected.

    If it's bothering you, you could do some analysis of what indexes are getting rebuilt the most and reduce their fill factors so that they fragment less and hence get rebuilt less.
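    For example, a rebuild with a lower fill factor would look something like this (the table, index, and fill factor value are purely illustrative; pick values from your own analysis):

      -- Illustration only: names and the fill factor value are made up
      ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
          REBUILD WITH (FILLFACTOR = 90);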

    Switching to bulk-logged recovery will not help. In bulk-logged recovery, index rebuilds are minimally logged and hence the log file itself does not need to be as large, but the log backups will be just as large as in full recovery.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • You could also play with the thresholds. I also use Ola's defrag scripts, but I set them to take it easy on reorganizing and only go for a full rebuild when it's needed.
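    For example, something along these lines raises the thresholds so less work is done each night (the values are examples only, and the parameter names are from Ola's IndexOptimize procedure as I remember them, so check the documentation for your version):

      EXECUTE dbo.IndexOptimize
          @Databases = 'USER_DATABASES',
          @FragmentationLevel1 = 15,   -- reorganize threshold (default 5)
          @FragmentationLevel2 = 40,   -- rebuild threshold (default 30)
          @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
          @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE';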

    Rebuilding indexes doesn't hit log growth the way reorganizing does.

    As Gail said, try adjusting the fill factors on the more volatile indexes so these operations happen less often.

    I've found it useful to have a job running at night that captures the fragmentation details of all the important indexes and saves them to a table. You can then query the table to see the fragmentation trends day by day. An index that fragments by 30% during a week might need a fill factor of 80 (just an example!), while one that only picks up 5% might be fine with a fill factor of 95. Nothing beats measuring and adjusting for your own workload!
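    A rough sketch of that kind of capture, using sys.dm_db_index_physical_stats (the history table name and columns are made up, so adjust to taste):

      -- Assumes a history table along these lines already exists:
      --   CREATE TABLE dbo.IndexFragHistory (
      --       CaptureDate datetime NOT NULL,
      --       DatabaseName sysname NOT NULL,
      --       ObjectName   sysname NOT NULL,
      --       IndexName    sysname NULL,
      --       AvgFragmentationPct float NULL,
      --       PageCount    bigint NULL);

      INSERT INTO dbo.IndexFragHistory
              (CaptureDate, DatabaseName, ObjectName, IndexName, AvgFragmentationPct, PageCount)
      SELECT  GETDATE(),
              DB_NAME(),
              OBJECT_NAME(ips.object_id),
              i.name,
              ips.avg_fragmentation_in_percent,
              ips.page_count
      FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
      JOIN    sys.indexes AS i
              ON  i.object_id = ips.object_id
              AND i.index_id  = ips.index_id
      WHERE   ips.page_count > 1000;   -- ignore tiny indexes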

    Cheers,

    JohnA

    MCM: SQL2008
