Tran log backup confusion: 2.5 GB log file but 70 GB log backup size

  • I'm thinking of just running the standard Update Statistics task I found in the Maintenance Plan, modified to cover ALL statistics with a full scan, but do you think there is a benefit to running this from the MaintenanceSolution instead?

    If so what is your statistics settings recommendation?

  • You're welcome. I would let Ola's script handle updating the stats using @UpdateStatistics = 'ALL', @OnlyModifiedStatistics = 'Y', and @StatisticsSample = 100. Note that these settings will do a full scan of all modified stats, so it could take a long time on a VLDB, but you'll give the optimizer the best possible information for estimating execution plans.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • ok, I modified the top of the SP found under master dbo.IndexOptimize:

    -- @UpdateStatistics nvarchar(max) = NULL,
    @UpdateStatistics nvarchar(max) = 'ALL',

    -- @OnlyModifiedStatistics nvarchar(max) = 'N',
    @OnlyModifiedStatistics nvarchar(max) = 'Y',

    -- @StatisticsSample int = NULL,
    @StatisticsSample int = 100,

    @StatisticsResample nvarchar(max) = 'N',

    ok, scratch this; I don't know how to save the changes. Do I just execute the opened SP after the changes are made?

  • If you right-clicked the script in SSMS Object Explorer and clicked Modify then it would have scripted the proc to a new Query Window as an ALTER PROC script. After modifying that you can simply execute the entire thing to save your changes to the server. Note that you should save that script off to another location since it is now non-standard in terms of what Ola has produced. A more advisable way to do this would be to leave the proc as-is and just provide the specialized parameters when you call it:

    EXECUTE [dbo].[IndexOptimize]
        @Databases = 'USER_DATABASES',
        @LogToTable = 'Y',
        @UpdateStatistics = 'ALL',
        @OnlyModifiedStatistics = 'Y',
        @StatisticsSample = 100;

    This way when Ola releases updated versions of his procs, which he periodically does, you will not need to remember to make your changes to his proc again.


  • A more advisable way to do this would be to leave the proc as-is and just provide the specialized parameters when you call it:

    Perfect! I like this solution. When I clicked to modify the SP it did create an ALTER PROCEDURE [dbo].[IndexOptimize] script, which is why I figured I could just execute the modified code, but I like what you have provided much better.

    Thanks again for all the help and valuable information!

  • So let me throw my weekly maintenance plan order to you and please provide any feedback.

    1. run db integrity check system db

    2. set db bulk mode

    3. run db integrity check users db

    4. set db full recovery

    5. backup db full

    6. backup db log file

    7. set db bulk mode

    8. indexOptimize with stat update

    9. run db integrity check users db

    10. set db full recovery

    I know the article says to do a db backup after the index optimize, but I was always taught to back up before those major operations in case of an issue.

  • lawson2305 (9/6/2012)


    So let me throw my weekly maintenance plan order to you and please provide any feedback.

    1. run db integrity check system db

    2. set db bulk mode < why?

    3. run db integrity check users db

    4. set db full recovery

    5. backup db full

    6. backup db log file

    7. set db bulk mode

    8. indexOptimize with stat update

    9. run db integrity check users db < why again?

    10. set db full recovery

    I know the article says to do a db backup after the index optimize, but I was always taught to back up before those major operations in case of an issue.

    In general the approach is a good one to keep. Index operations are not going to change the data in your database. Take a backup before if you're comfortable, but I do not think it is necessary and do not do that myself. If you're running in FULL or BULK LOGGED you can always get back to the point in time before you began the index and stats maintenance if ever needed.

    How often are you running log backups? You should take a log backup after you do your index maintenance to free the active portion of the log. Index maintenance will bloat your tran log backups (even in BULK LOGGED).

    See how this sits with you:

    1. run db integrity check system and users db

    2. run full backup

    3. set db bulk mode

    4. IndexOptimize with stats update

    5. set db full recovery

    6. run log backup
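    The final log-backup step might look something like the following sketch (the database name and file path are placeholders, not anything from this thread):

    ```sql
    -- Back up the tran log right after index maintenance to free the
    -- active portion of the log; database name and path are hypothetical.
    BACKUP LOG [MyUserDb]
    TO DISK = N'X:\SQLBackups\MyUserDb_20120906_0300.trn'
    WITH CHECKSUM, INIT;
    ```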


  • the reason I was doing the integrity check twice is because I was taught to run it first. Then I ran it again because of this comment:

    http://ola.hallengren.com/frequently-asked-questions.html

    Scheduling:

    "I recommend that you perform the integrity check after the index maintenance because index rebuilds sometimes fix database corruption."

  • lawson2305 (9/6/2012)


    the reason I was doing the integrity check twice is because I was taught to run it first. Then I ran it again because of this comment:

    http://ola.hallengren.com/frequently-asked-questions.html

    Scheduling:

    "I recommend that you perform the integrity check after the index maintenance because index rebuilds sometimes fix database corruption."

    He makes good points, but I do not see it that way. If there is corruption in the database then I want it to surface during the integrity check, not be masked by the index maintenance job. SQL Server will not create corruption on its own; problems usually stem from something gone awry in the I/O subsystem. The earlier problems like this are detected, the better.

    Taking a full backup (or differential) after the index maintenance job will not free the active portion of the log. Only a tran log backup will do that so I'll stick with my log backup after the index job instead of a full.
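    If you want to see that effect for yourself, one quick check (a standard DBCC command) is to look at log usage before and after the log backup:

    ```sql
    -- Reports Log Size (MB) and Log Space Used (%) for every database;
    -- run it before and after the log backup to watch the active
    -- portion of the log shrink.
    DBCC SQLPERF (LOGSPACE);
    ```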


  • ok here is where I'm settling.

    1. run db integrity check system db

    2. run db integrity check users db

    3. backup db full

    4. backup db log file (overwrite)

    5. set db bulk mode

    6. indexOptimize with stat update

    7. run db integrity check users db

    8. set db full recovery

    9. backup db log file (append)

    I do nightly log backups (append), and then from 8 to 5, during high transaction volume, I'm doing log backups (append) every 15 minutes, since we are currently doing some archive-and-purge work inside the db that is really causing major transaction accumulation. Once this is done I will probably remove the every-15-minute run.

    What do you think about his weekly recommendation of running these cleanup tasks?

    Cleanup:

    • sp_delete_backuphistory one day a week

    • sp_purge_jobhistory one day a week

    • CommandLog Cleanup one day a week

    • Output File Cleanup one day a week

    I have never run any cleanup tasks before.
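    For reference, the history-cleanup portion of those tasks could be sketched roughly like this (the 30-day retention period is an assumption, and the CommandLog table location assumes the solution was installed in master, as mentioned earlier in this thread):

    ```sql
    -- Hypothetical weekly cleanup: purge msdb backup/job history and
    -- Ola's CommandLog table for rows older than 30 days.
    DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());

    EXECUTE msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
    EXECUTE msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;

    DELETE FROM master.dbo.CommandLog
    WHERE StartTime < @cutoff;
    ```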

  • Are you appending each backup to the same file? If so, I would change that. Each backup should go to its own file. Currently you would lose all of your backups if that one file were to be accidentally deleted or were to become corrupt.
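    One common way to get one backup per file is to build a timestamped file name at run time (the database name and path below are hypothetical examples):

    ```sql
    -- Build a unique, timestamped file name so each backup gets its
    -- own file instead of being appended to a single backup set.
    -- Style 112 = yyyymmdd, style 108 = hh:mi:ss.
    DECLARE @stamp nvarchar(20) =
        CONVERT(nvarchar(8), GETDATE(), 112) + N'_'
        + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), N':', N'');
    DECLARE @file nvarchar(260) =
        N'X:\SQLBackups\MyUserDb_FULL_' + @stamp + N'.bak';

    BACKUP DATABASE [MyUserDb]
    TO DISK = @file
    WITH CHECKSUM, INIT;
    ```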

  • lawson2305 (9/6/2012)


    ok here is where I'm settling.

    1. run db integrity check system db

    2. run db integrity check users db

    3. backup db full

    4. backup db log file (overwrite)

    5. set db bulk mode

    6. indexOptimize with stat update

    7. run db integrity check users db

    8. set db full recovery

    9. backup db log file (append)

    I do nightly log backups (append), and then from 8 to 5, during high transaction volume, I'm doing log backups (append) every 15 minutes, since we are currently doing some archive-and-purge work inside the db that is really causing major transaction accumulation. Once this is done I will probably remove the every-15-minute run.

    What do you think about his weekly recommendation of running these cleanup tasks?

    Cleanup:

    • sp_delete_backuphistory one day a week

    • sp_purge_jobhistory one day a week

    • CommandLog Cleanup one day a week

    • Output File Cleanup one day a week

    I have never run any cleanup tasks before.

    Why run checkdb again? If there is corruption you want to stop right there and figure out what's going on. If, after analysis, you determine a targeted index rebuild might correct it, run that, and then run checkdb again. checkdb might be the most intense operation you'll put your database through, so if your DB is large, running it twice, for no benefit in the vast majority of cases and only a small benefit in the rest, is not worth it in my opinion.

    What do you mean by 'overwrite'? Are you in the habit of appending backups to existing files? If so, I would recommend switching to one backup per file.


  • I do append to the same file, for ease of file cleanup later when I do the overwrite.

    While this might not be the most perfect way, it works, and I do understand the logic of the other method.

    We back up on multiple levels, though, so I'm not so concerned about this specific file. We do nightly file system backups and nightly SQL backups from Backup Exec. The SQL backup I'm performing here is more to control log growth and to just have another level of protection.

  • If there is corruption you want to stop right there and figure out what's going on.

    So do you suggest I only move on to the backup and everything else if the checkdb is successful?

    This happens at 2 AM; no production is running during this process, and it takes about 20 minutes to do the check and another 20 to do the index maintenance.

    I will also be honest, as I have stated before (not sure if in this topic or not): my real job is net and sys admin. DB administration falls to me only because I'm the only one here, and I'm lucky if I can devote 10 minutes a week to it. So I do not check these logs weekly or stay on top of them. I need more of a run-it-and-have-faith-nothing-is-wrong approach.

    And so far, in my 10 years, SQL has treated me very well in that regard.

    I will say one concern you do raise, which has been on my radar, is snapshots, since we are 100% virtualized. I have set the drives that hold my SQL dbs to persistent so they will not snapshot, but I still have concerns. My staff performs snaps once a month for update installations. This, of course, is another whole ball of wax.

  • If you're talking about small DBs where it only takes 20 minutes for CHECKDB and you want as close to turnkey as possible, then go for it.

    I still would not overwrite your backup files, though. If that's what you're going to do, you might as well take a FULL backup, switch your DB into SIMPLE recovery before all the maintenance tasks, then switch back to FULL recovery, take another FULL backup, and resume log backups.
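    That alternative, sketched out (the database name and file paths are placeholders):

    ```sql
    -- Hypothetical sequence for the SIMPLE-recovery variant described above.
    BACKUP DATABASE [MyUserDb]
    TO DISK = N'X:\SQLBackups\MyUserDb_pre_maint.bak'
    WITH CHECKSUM, INIT;

    ALTER DATABASE [MyUserDb] SET RECOVERY SIMPLE;

    -- ... run the integrity checks and IndexOptimize here ...

    ALTER DATABASE [MyUserDb] SET RECOVERY FULL;

    BACKUP DATABASE [MyUserDb]
    TO DISK = N'X:\SQLBackups\MyUserDb_post_maint.bak'
    WITH CHECKSUM, INIT;
    -- resume scheduled log backups from this new FULL backup
    ```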

    edit:

    PS: VM snaps are no substitute for valid (i.e., restorable) full + tran log backups taken from SQL Server to achieve point-in-time recovery. VMs can always serve as a recovery option in a pinch, but should not be used as a first line of defense, IMO. You are right about it being a whole other ball of wax.


