Backup and log

  • Hello,
    For the past two years I have managed a database with a lot of transactions.
    Recovery model is Full.
    Full backup once a week (BACKUP DATABASE)
    Diff backup once a day (BACKUP DATABASE ... WITH DIFFERENTIAL)
    Transaction log backup every hour (BACKUP LOG)

    The .mdf file size is 23 GB.
    The .ldf file size is 35 GB.  Autogrow = true, File growth = 10%, Maximum file size = 2,097,152 MB (which is not far from unlimited).

    The .ldf file is on its own hard drive; the capacity of that drive is 40 GB.

    So what happens here?  The .ldf file keeps growing and will reach my hard drive's capacity soon.

    What should I do?  What's the best practice with the .ldf file?

    thanks

  • Hi mdube,

First, I would change your Autogrow to be 64 MB.  You don't want to grow by 3.5 GB (10% of 35 GB) on your next autogrow, because that alone can cause some downtime.
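    If it helps, that setting can also be changed with T-SQL instead of the SSMS dialog.  This is just a sketch: the database and logical file names are placeholders, so check sys.database_files for the real logical name first.

    ```sql
    -- Find the logical name of the log file (and confirm percent growth is on)
    SELECT name, type_desc, size, growth, is_percent_growth
    FROM YourDb.sys.database_files;

    -- Switch from 10% growth to a fixed 64 MB increment
    -- ('YourDb' and 'YourDb_log' are placeholder names)
    ALTER DATABASE YourDb
    MODIFY FILE (NAME = N'YourDb_log', FILEGROWTH = 64MB);
    ```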

    Second: Run the following and let us know the value for Log Space Used (%). 
    DBCC SQLPERF(LOGSPACE)

A few thoughts come to mind.  If the .ldf file is full, then maybe your weekly FULL backup is being done WITH COPY_ONLY, so the database is never actually marked as having had a regular FULL backup.  If the .ldf file is mostly empty, then you probably missed a few weeks of transaction log backups and the .ldf file grew during that time (if this is the case, you just need to shrink the .ldf file).
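    Another quick check (not in the original post, just a common diagnostic) is to ask SQL Server directly what is preventing log truncation.  The database name below is a placeholder.

    ```sql
    -- log_reuse_wait_desc tells you why the log can't be truncated:
    -- NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, etc.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDb';  -- placeholder database name
    ```

    A value of LOG_BACKUP would point at missing or failing transaction log backups, which matches the second scenario above.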

Thanks for your help; I really appreciate your reply.

    DBCC SQLPERF(LOGSPACE)
    Result :
    Log Size(MB): 34077.49
    Log space used (%): 4.790907
    Status: 0

Autogrow has been changed to 64 MB.

  • mdube,

    The good news is that your log file has plenty of space and it shouldn't grow anytime soon. 

    What to do:
You need to shrink the log file and then keep a close eye on the log size.  The next time it grows a lot, you want to be able to relate it to something.  I have seen this happen for two main reasons: either a transaction log backup job hasn't run for a few days, forcing the log to hold a few days of transactions, or a large batch job ran (such as reindexing a big table or loading a big table).

I googled "How to shrink a log file in SQL Server" and found this link: https://www.mssqltips.com/sqlservertutorial/3311/how-to-shrink-the-transaction-log/
It's worth reading.

One more thing, and this gets confusing.  Log files are strange because they are made up of virtual log files.  When you shrink the log file, it shrinks down to the last virtual log file that has data in it, and the next writes wrap around to the beginning of the log file.  After the next transaction log backup, the data in that last virtual log file gets backed up and released.  Then the next shrink can release even more space.
To summarize the steps: shrink the log file (which shrinks some of it and leaves the last active virtual log), take a transaction log backup, and then shrink the log file again (to release the rest).
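    The three steps above could look something like this in T-SQL.  This is only a sketch: the database name, logical log file name, backup path, and 1024 MB target size are all placeholders to adapt to your environment.

    ```sql
    USE YourDb;  -- placeholder database name

    -- Step 1: shrink down to the last active virtual log file
    DBCC SHRINKFILE (N'YourDb_log', 1024);  -- target size in MB; logical name is a placeholder

    -- Step 2: back up the log so the remaining active virtual log can be released
    BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';  -- placeholder path

    -- Step 3: shrink again to release the rest
    DBCC SHRINKFILE (N'YourDb_log', 1024);
    ```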

    Best of luck

Thanks again,

    I shrank the log file with Management Studio as shown in the link you provided.  It went down a bit.

    DBCC SQLPERF(LOGSPACE)
    Result :
    Log Size(MB): 25602.87
    Log space used (%): 9.106944

    Then I did a Transaction log backup, and shrunk again.
    DBCC SQLPERF(LOGSPACE)
    Result :
    Log Size(MB): 1.039063
    Log space used (%): 49.06015

So now that's good, thanks.

    Will it grow again?  Pretty sure it will.  Maybe because of the index rebuild I do once a week?

It will grow again because you set it so small (1 MB), and when it grows, there is a performance hit.  In a perfect scenario, you'll have the size be just big enough for its largest transaction.  I would watch it closely over the next couple of weeks.  Don't shrink it on a regular basis, because then you force it to grow again for normal operations.  Only shrink it when the log file grew because of a known unusual event (such as the transaction log backup failing for a couple of weeks).

Do not shrink it because of an index rebuild, because it will need the same space the next time you rebuild the index.  This is a common mistake people make.  Index rebuilds are normal procedures, so you need space for them.  If you shrink the log after every reindex, you force it to work harder on the next reindex because it has to grow during the process.

    Monitoring the log size is now a new part of your job.  Congrats.
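    For that monitoring, the dynamic management view sys.dm_db_log_space_usage returns the same numbers as DBCC SQLPERF(LOGSPACE) for the current database, in a form that's easy to log to a table on a schedule.  A minimal sketch:

    ```sql
    -- Current log size and percent used for the database you're connected to
    SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
           used_log_space_in_bytes / 1048576.0 AS log_used_mb,
           used_log_space_in_percent           AS log_used_pct
    FROM sys.dm_db_log_space_usage;
    ```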

  • Thank you very much

OK, I have found what makes the log grow: it's our weekly index optimization job.  Before the job the log was 3 GB; after the job the log is 39 GB (with 99% free space).
    DBCC SQLPERF(LOGSPACE)
    Result :
    Log Size(MB): 38584.8
    Log space used (%): 0.06019614

Why does the log grow so much when it ends up almost empty (99% free space)?
Should I shrink the log file after the job executes?

    the index optimization job I use is the one found here :
    https://ola.hallengren.com/downloads.html
    IndexOptimize.sql: Stored procedure to rebuild and reorganize indexes and update statistics

    thanks for your help

Martin

  • mdube 62615 - Tuesday, February 5, 2019 5:51 AM


    What is the size of your largest index (Clustered or otherwise)?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Tuesday, February 5, 2019 8:50 AM

    What is the size of your largest index (Clustered or otherwise)?

    IndexName                              IndexSizeKB
    IX_INSPC_DATA_DEFNT_FILE    7471048
    IX_INSPC_DATA_DEFNT_FIELD    5768896
    PK_INSPC_ATTACHMENT    2152952
    IX_ID_SD_INSPC_CATGR    1599696
    IX_ID_SD_INSPC_COMPS    1597400
    PK_INSPC_DATA_SPECF    1364472
    IX_ID_FILE_STATEMENT    1348416
    IX_ID_DATA_GROUP_MULTI    1345040
    PK_INSPECTION    501768
    PK_IMPORT_LOGS_REJECT    206432

  • mdube 62615 - Tuesday, February 5, 2019 9:11 AM


Ok... from the numbers above and what you said about index maintenance, it sounds like you're using the supposed "Best Practice" of doing nothing at < 10% fragmentation, REORGANIZEing between 10 and 30% fragmentation, and REBUILDing at > 30%.

    With that in mind, let me first recommend that, unless you have LOBs in indexes that are in good need of being compressed, stop using REORGANIZE.  It's fully logged and it's not the tame, little, lightweight beasty that people claim it is.  I know you don't have a Clustered Index as large as the following, but similarities exist in smaller tables.  I did a REORGANIZE on a 146 Gigabyte Clustered Index that was only 12% fragmented (logical fragmentation).  It took an hour and 21 minutes and caused my nice little 20GB log file to EXPLODE to 227 Gigabytes and it didn't fix all of the fragmentation (either logical or page density wise).  You can just imagine the effect that had on the backups I did after that.  Thankfully, that was on a test system.

Then, I restored the database from the same backup I made the original test database from, which, of course, restored everything to the same condition as before the REORGANIZE, including having only a 20GB log file.  After that I did a REBUILD using the BULK-LOGGED recovery model.  It completed in a little over 12 minutes and the log file only grew to 37GB.  The backups were much more reasonable, as well.  A second test showed that something else had been running during my first test and the log file wouldn't have grown beyond the original 20GB at all.

    I also did the Rebuild test in the full recovery model.  As expected, it grew to 146GB but that's a good amount less than the 227GB that REORGANIZE killed me with.

There's also the "DBA Drive" trick.  I've got a great relationship with the Infrastructure folks and they've given me my own "DBA Only" drive.  I store nothing permanent on the drive but I use it a lot.  They also have made it so that I don't need to stay in the FULL Recovery Model for mirroring, DR replication, or anything else, so I can slip back and forth between the BULK-LOGGED and FULL Recovery Models without a concern.

    With that, I not only change to BULK-LOGGED when doing index maintenance (which seriously reduces the time and the effect on the log file), but I also do a CREATE INDEX WITH DROP_EXISTING on the largest index that I need to rebuild, to temporarily move it to a temporary File Group that I build on the special "DBA Only" drive.  That frees up all sorts of space in the MDF file to do rebuilds on all of the rest of the indexes, which are smaller than the one I moved out, without growing the MDF file (it also helps to rebuild those indexes starting with the smallest first... it makes for more contiguous free space and larger fragment sizes).

    When I'm all done with those, I do another CREATE INDEX WITH DROP_EXISTING to move the original largest index back onto the primary MDF so I don't end up growing the MDF file at all.  Then I drop the now-empty temporary File and File Group and I'm done.
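    A rough sketch of that "DBA Drive" sequence, with every name and path a placeholder (the table, index, key column, filegroup, and file location are all invented for illustration):

    ```sql
    -- 1. Build a temporary filegroup on the "DBA Only" drive
    ALTER DATABASE YourDb ADD FILEGROUP DBA_Temp;
    ALTER DATABASE YourDb
    ADD FILE (NAME = N'DBA_TempFile',
              FILENAME = N'E:\DBAWork\DBA_TempFile.ndf',
              SIZE = 10GB)
    TO FILEGROUP DBA_Temp;

    -- 2. Move the largest index out of PRIMARY, freeing space for the other rebuilds
    CREATE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (SomeKey)
    WITH (DROP_EXISTING = ON)
    ON DBA_Temp;

    -- 3. ...rebuild the remaining indexes here, smallest first...

    -- 4. Move the big index back, then drop the temporary file and filegroup
    CREATE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (SomeKey)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];

    ALTER DATABASE YourDb REMOVE FILE DBA_TempFile;
    ALTER DATABASE YourDb REMOVE FILEGROUP DBA_Temp;
    ```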

    And, just to reiterate and emphasize, REORGANIZE sucks (I have a 2 hour presentation on that and other related things).  Unless I absolutely have to use it to compact LOBs (which I also force to be out of row like the old days), I don't use REORGANIZE as a part of my general index maintenance.

The bottom line is that my log file stays nice and small and I don't have a shedload of unnecessary free space in my MDF file.  And, using the BULK-LOGGED trick, I normally (except for indexes that measure in the hundreds of Gigabytes) don't have to use WITH ONLINE = ON for the rebuilds, which makes the rebuilds even faster and does a better job of defragging, to boot.

Not all of these tricks are possible in every environment, but I will tell you that it's usually better to wait for a maintenance window to do a REBUILD than it is to do a REORGANIZE.  In fact, for GUIDs and other evenly distributed indexes, it's actually better to do no index maintenance than it is to do a REORGANIZE.  Again, I have the proof on that, but it's a 2-hour session, which is much too long to post here.

    --Jeff Moden



p.s.  If you can take advantage of some of the tricks I talked about, including using ONLINE = OFF, your 7.5GB index should take about 43 seconds per REBUILD (and only 43 seconds one time if you do it in-place instead of doing the DBA disk trick) and you'll be much better off than if you use REORGANIZE.  The ONLY exception is when you have LOBs to compress which, for some reason, they made it so that REBUILD won't do LOB compression.  As an additional benefit, you won't have to rebuild stats on the indexes that you've rebuilt.  You still need to rebuild stats that need it for other indexes and for column stats (which should be limited to essentials).

Also, if you do/can use the BULK-LOGGED trick during index maintenance (it kills AGs and anything else that relies on the log file; I don't have that problem), take the time to do a T-Log backup before switching to BULK-LOGGED, don't forget to go back to the FULL Recovery Model when done, and take another T-Log backup as soon as you do, to minimize the window where a T-Log backup contains "Minimally Logged" actions.  You can't restore to the middle of such a log backup; you have to either use it all or stop before you get to it.
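    The fencing described above can be sketched like this (database name and backup paths are placeholders):

    ```sql
    -- Fence the minimally logged work with log backups on both sides
    BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_before.trn';  -- placeholder path

    ALTER DATABASE YourDb SET RECOVERY BULK_LOGGED;

    -- ... run the index rebuilds here ...

    ALTER DATABASE YourDb SET RECOVERY FULL;
    BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_after.trn';   -- placeholder path
    ```

    The two log backups keep the stretch of log containing minimally logged operations as short as possible, since you can't do a point-in-time restore into the middle of it.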

    --Jeff Moden



Jeff, your knowledge of the subject is impressive.  Thanks a lot for your time; very instructive, but it's difficult for me to know what I should do now!

    the index optimization job I use is the one found here :
    https://ola.hallengren.com/downloads.html   IndexOptimize.sql: Stored procedure to rebuild and reorganize indexes and update statistics

    I execute it like this :
    EXECUTE IndexOptimize @databases = 'PROD_RPA',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 25,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y' ;

I can't say for sure that it's the right way; I am not the one who set this up.  Should I still use this job?  Am I using the right parameters?  Or can you share another job that would be more effective than the one I currently use?
    thanks

    Martin

  • mdube 62615 - Thursday, February 7, 2019 9:11 AM


    Ola's stuff is pure gold and will definitely make your life easier.  The key is that you need to do some analysis based on what I talked about to keep your log file from exploding.

    There's also the subject of analyzing the indexes for the type of Insert and Update Patterns they suffer.  That can take a while because the outcome is setting a correct Fill Factor and possibly finding and fixing the patterns (especially anything that has "ExpAnsive Updates").

My starting point would be to not use REORGANIZE unless you need to compact LOBs.  I'd also determine whether you can make the temporary change to the BULK-LOGGED Recovery Model while you're doing REBUILDs.
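    Applying that advice to the call quoted above, one possible variation is simply to drop REORGANIZE from the medium tier so both tiers REBUILD.  This is a sketch only, built from the parameters already shown in this thread; test it before adopting it.

    ```sql
    -- Same job as before, but REORGANIZE removed from the medium-fragmentation tier
    EXECUTE IndexOptimize
        @Databases = 'PROD_RPA',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REBUILD_OFFLINE',
        @FragmentationHigh = 'INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 25,
        @UpdateStatistics = 'ALL',
        @OnlyModifiedStatistics = 'Y';
    ```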

    --Jeff Moden


