SQLServerCentral is supported by Redgate


Huge log backups after Ola's defrag and maint script runs
Posted Saturday, April 20, 2013 10:18 AM


I have a SQL 2008 R2 instance that's hosting several OLTP databases for LOB applications.

Every night Ola Hallengren's index defrag script runs, followed by a full nightly backup (using Ola's backup scripts). Log backups are taken every hour from 3 am (after the nightly backup concludes) until 11 pm (right before the nightly index defrag script runs).

Most backup log files are just a few MB, or even less than a meg. However, the 3 am log backup file often runs between 7 GB and 11 GB because of the index defrag process.

My question is whether it would be a good idea to toggle the database recovery model to simple then back to full immediately before the nightly backup runs. The idea would be to get one last good log backup beforehand, then intentionally break the log chain right before the full backup runs since I would never need to restore prior to the latest full backup.

What are your thoughts on this idea? Or, should I just live with the large log backup files?

Thanks in advance for your advice,

Andre Ranieri
Post #1444719
Posted Saturday, April 20, 2013 11:29 AM
If this is an issue for you, you could try bulk-logged recovery first. But remember that this makes the backup process more complicated, and what happens if, for example, the script that changes the database recovery model fails?
Post #1444726
Posted Saturday, April 20, 2013 12:49 PM



Fully expected.

If it's bothering you, you could do some analysis of what indexes are getting rebuilt the most and reduce their fill factors so that they fragment less and hence get rebuilt less.
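To make the fill-factor suggestion concrete, a minimal sketch follows. The table and index names are made up for illustration; a fill factor of 80 leaves roughly 20% free space per leaf page so the index fragments more slowly between maintenance runs:

```sql
-- Hypothetical example: rebuild a frequently fragmenting index with a
-- lower fill factor. IX_Orders_CustomerID and dbo.Orders are placeholder
-- names -- substitute the indexes your fragmentation analysis flags.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);
```

The trade-off is that a lower fill factor makes the index physically larger and scans slightly slower, so it is worth applying only to indexes that demonstrably fragment fast.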

Switching to bulk-logged recovery will not help. In bulk-logged recovery, index rebuilds are minimally logged, so the log file itself does not need to be as large, but the log backups will be just as large as in full recovery.

Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1444736
Posted Sunday, April 21, 2013 12:22 AM


You could also play with the thresholds - I also use Ola's defrag scripts, but I set them to go easy on reorganize and only do a full rebuild when needed.
Rebuilding indexes does not hit log growth the way reorganizing does.
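As a hedged sketch of what "playing with the thresholds" might look like with Ola Hallengren's IndexOptimize procedure (parameter values here are illustrative, not recommendations - check Ola's documentation for the defaults and valid actions):

```sql
-- Illustrative only: skip the reorganize tier entirely and rebuild
-- once fragmentation crosses the higher threshold.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,       -- below Level1: do nothing
    @FragmentationMedium = NULL,    -- Level1..Level2: do nothing (no reorganize)
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 15,      -- example threshold
    @FragmentationLevel2 = 30;      -- example threshold
```

Whether skipping reorganize is right for you depends on your workload; measure the resulting log volume before and after changing the thresholds.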

As Gail said, try adjusting the fill factors on the more volatile indexes so these operations happen less often.

I've found it useful to have a job running at night that captures the fragmentation details of all important indexes, saving them to a table. You can then query the table to see the fragmentation trends day by day. An index that fragments by 30% during a week might need a fill factor of 80 (for example!), while one that increases by 5% might be fine with a fill factor of 95. Nothing beats measuring and adjusting for your own workload!
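A minimal sketch of such a nightly capture, using sys.dm_db_index_physical_stats in LIMITED mode (the history table dbo.IndexFragHistory is a hypothetical name - create whatever schema suits you):

```sql
-- Snapshot fragmentation for the current database so trends can be
-- queried day by day. dbo.IndexFragHistory is a placeholder table.
INSERT INTO dbo.IndexFragHistory
        (CaptureDate, DatabaseName, SchemaName, TableName, IndexName,
         AvgFragmentationPct, PageCount)
SELECT  SYSDATETIME(),
        DB_NAME(),
        s.name,
        t.name,
        i.name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN    sys.tables  AS t ON t.object_id = ps.object_id
JOIN    sys.schemas AS s ON s.schema_id = t.schema_id
WHERE   ps.page_count >= 1000;   -- ignore tiny indexes whose fragmentation is noise
```

Filtering out small indexes matters because fragmentation percentages on indexes of a few pages are meaningless and would pollute the trend data.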



MCM: SQL2008
Post #1444779