Log Backups don't keep up during weekend maintenance

  • We have a 600GB database that gets intelligent reindexing plus six hours of update statistics (full scan) every weekend. I can't get approval to do any of this on weeknights. The log backups run every 15 minutes but often take over an hour to complete during this maintenance.

    My latest adjustment was to introduce delays between each index rebuild or reorganization and between groups of statistics updates. This grows the window to about 12 hours, and we're adding data on a regular basis as we convert client data from a legacy system.

    There's no point in making log backups more frequent when long-running maintenance transactions delay them. The only things I can think of are to expand the log drive and/or flip to simple recovery during the maintenance. Of course, with Commvault running backups, it will automatically do a differential upon the switch to simple recovery and then launch a full backup when my agent job flips the DB back to full recovery -- not much I can do about that.

  • Some suggestions, if not already done:

    improve the reindex job to run update stats only on indexes where an ALTER INDEX REBUILD was not run

    do you really need to do full scans?

    put the log backup jobs (as an sp_start_job command) into the logic of the reindex job to give more control over when they run

    remember that update stats does not produce much log activity.
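    The sp_start_job suggestion above could look something like this inside the reindex loop. This is only a sketch: the job name 'DB Log Backup' and the polling interval are assumptions, not from the thread.

    ```sql
    -- Sketch: kick off the log-backup job between index operations so the
    -- log is emptied before the next rebuild begins. The job name
    -- 'DB Log Backup' is a placeholder.
    EXEC msdb.dbo.sp_start_job @job_name = N'DB Log Backup';

    -- sp_start_job returns immediately; poll msdb until the job finishes
    -- before starting the next rebuild.
    WHILE EXISTS (
        SELECT 1
        FROM msdb.dbo.sysjobactivity ja
        INNER JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
        WHERE j.name = N'DB Log Backup'
          AND ja.start_execution_date IS NOT NULL
          AND ja.stop_execution_date IS NULL
    )
        WAITFOR DELAY '00:00:10';
    ```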

    ---------------------------------------------------------------------

  • Indianrock (11/12/2010)


    The only things I can think of are to expand the log drive and/or flip to simple recovery during the maintenance.

    One problem with the simple switch is you'll break your log chain. Make sure you're okay with that.

    Expanding the log drive is one option. You could also (if you're almost out of room) get another drive assigned and create a second log file. These are still used serially, so you won't get optimization out of it, but it will allow you to use storage space in two separate logical drives contiguously.

    Those must be some huge reindexes if you're running into that level of size problems, though. What size do you try to keep your logs at?


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • If you haven't already, you should consider a backup compression tool, like LiteSpeed, to reduce the size and run time of the log backups and regular backups.

    If you aren't doing it already, and it is possible with your table structure, consider table partitioning so that you only have to reindex the active partitions of the large tables.

    Consider reindexing some indexes one week and others the next week, especially for very large ones. There will probably not be much query performance impact from letting it go an extra week.

    Finally, if the only problem is that the log backups take a long time to complete, that isn't really a problem as long as you have enough space in your transaction log file.
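    The partition suggestion above boils down to a single-partition rebuild, available since SQL Server 2005. A sketch, where the table, index, and partition number are placeholders, not from the thread:

    ```sql
    -- Sketch: rebuild only the active (most recent) partition of a
    -- partitioned index instead of the whole thing.
    -- 'dbo.BigTable', 'IX_BigTable_Date', and partition 12 are placeholders.
    ALTER INDEX IX_BigTable_Date ON dbo.BigTable
        REBUILD PARTITION = 12;
    ```

    Note that single-partition rebuilds are offline operations on SQL Server 2005/2008, so this fits a maintenance window rather than online use.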

  • We've found that without doing update stats with full scan, the query optimizer mis-estimates the number of records in the index, producing poor execution plans. I can't do anything with the log or other backups other than make suggestions to the Systems team who manages Commvault. Native SQL backups can't be run or we'll break the log chain Commvault establishes.

    My maintenance runs on Saturday. The log file is about 100GB and lately hasn't gotten over 50% full during maintenance, but that won't last -- we just added over 400k new records yesterday and will convert over a million more legacy records next weekend. The log drive is 170GB, but I can easily ask for three times that. The real problem is how long maintenance takes. The next differential is 1am Sunday morning, followed by a systems maintenance window that I don't want to impact.

    The update stats routine, like the reindex routine, uses its own stored procedure that looks for indexes or stats needing updating, so neither just updates everything. Right now we reorganize indexes with 10-30% fragmentation and rebuild those over 30%.
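    Threshold logic like that described above is typically driven by sys.dm_db_index_physical_stats. A minimal sketch, using the 10%/30% thresholds from the post; everything else (object names, LIMITED scan mode) is an assumption:

    ```sql
    -- Sketch: classify indexes per the thresholds in the post --
    -- reorganize at 10-30% fragmentation, rebuild above 30%.
    SELECT QUOTENAME(s.name) + '.' + QUOTENAME(o.name) AS table_name,
           QUOTENAME(i.name) AS index_name,
           ps.avg_fragmentation_in_percent,
           CASE WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
                ELSE 'REORGANIZE'
           END AS action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
    INNER JOIN sys.indexes i ON i.object_id = ps.object_id
                            AND i.index_id = ps.index_id
    INNER JOIN sys.objects o ON o.object_id = i.object_id
    INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
    WHERE ps.avg_fragmentation_in_percent >= 10
      AND ps.index_id > 0;  -- skip heaps
    ```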

  • You don't need to do update stats if you are rebuilding an index; you only need to do that if you reorganize. When you rebuild an index, its statistics are automatically updated (with the equivalent of a full scan). Reorganizing does not update statistics.

    Also, you can run a differential or full backup while the transaction log backup is running. They are independent of each other.

  • The update stats routine looks for stats needing updating, so I'm figuring that if an index was rebuilt and the statistics related to it were updated, the stats routine would ignore it. Part of our problem, I'm sure, is having indexes that serve both OLTP and OLAP queries against the same database; data warehousing should be in our future. If we did use simple recovery, we'd have exposure from about 7am Saturday (last diff backup completion) until early Sunday morning when the full backup completes. I'm not exactly sure when log backups would resume if the full launches upon resumption of full recovery mode.

    Keep in mind that all backups are handled by Commvault, not SQL Agent jobs of any kind, and Commvault makes its own decisions about what to do based on database events like a recovery model change.

    I have a feeling that weekly stats updates are more important than minimal index fragmentation, so I could experiment with higher fragmentation thresholds in that job. I think our default index fill factor is 90.

  • So, back to the question of whether stats are being updated on indexes that were just rebuilt. The stats proc in use runs the two statements below (slightly modified here for clarity). If modifiedRows shown below is changed by an index rebuild, then we could be updating stats unnecessarily; index maintenance runs before stats.

    -- Make the queue of indexes to update:
    IF OBJECT_ID('tempdb..##updateStatsQueue') IS NOT NULL
        DROP TABLE ##updateStatsQueue;

    SELECT schemas.name AS table_schema,
           tbls.name AS table_name,
           i.name AS index_name,
           i.id AS table_id,
           i.indid AS index_id,
           i.groupid AS groupid,
           i.rowmodctr AS modifiedRows,
           ( SELECT MAX(rowcnt)
             FROM sysindexes i2
             WHERE i.id = i2.id
               AND i2.indid < 2 ) AS rowcnt,
           STATS_DATE(i.id, i.indid) AS lastStatsUpdate,
           'False' AS Processed
    INTO ##updateStatsQueue
    FROM sysindexes i
    INNER JOIN sysobjects AS tbls ON i.id = tbls.id
    INNER JOIN sysusers AS schemas ON tbls.uid = schemas.uid
    INNER JOIN INFORMATION_SCHEMA.TABLES AS tl
            ON tbls.name = tl.table_name
           AND schemas.name = tl.table_schema
           AND tl.table_type = 'BASE TABLE'
    WHERE i.indid > 0
      AND table_schema <> 'sys'
      AND i.rowmodctr > 0
      AND ( SELECT MAX(rowcnt)
            FROM sysindexes i2
            WHERE i.id = i2.id
              AND i2.indid < 2 ) > 0;
    GO

    SELECT TOP 1 *
    FROM ##updateStatsQueue
    WHERE Processed = 'False'
    ORDER BY ABS(modifiedRows) / ( CAST(rowcnt AS decimal) + 0.01 ) DESC,
             lastStatsUpdate;

  • rowmodctr counts the total number of inserted, deleted, or updated rows since the last time statistics were updated for the table, so a reindex would not affect it.

    Run STATS_DATE or DBCC SHOW_STATISTICS afterwards to be sure.

    http://msdn.microsoft.com/en-us/library/ms190283(v=SQL.90).aspx
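    The suggested before-and-after check could look like this. A sketch only: the table and index names are placeholders, not from the thread.

    ```sql
    -- Sketch: capture stats date and rowmodctr, rebuild, then compare.
    -- 'dbo.BigTable' and 'IX_BigTable_Date' are placeholders.
    SELECT name, STATS_DATE(id, indid) AS last_stats_update, rowmodctr
    FROM sysindexes
    WHERE id = OBJECT_ID('dbo.BigTable');

    ALTER INDEX IX_BigTable_Date ON dbo.BigTable REBUILD;

    SELECT name, STATS_DATE(id, indid) AS last_stats_update, rowmodctr
    FROM sysindexes
    WHERE id = OBJECT_ID('dbo.BigTable');
    ```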

    ---------------------------------------------------------------------

  • Switch to bulk-logged rather than simple recovery; ALTER INDEX REBUILD is minimally logged in this mode too, and unlike simple recovery it doesn't break the log chain!

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉
