Shrinking the Log file...best practices?

  • Ed Wagner (10/26/2016)


    ...Taking regular log backups will cause the backed up portion of the log to be available for re-use, which will negate the goal of the exercise...

    Well, you could just sum the size of the backed up log in that case, so I don't know that it would really hurt anything.

If actually protecting the data with log backups during the experiment isn't needed, that might even be preferable: you could take all the log backups to 'NUL', avoiding unnecessary use of drive space, and just sum the size of those log backups as recorded in the backupset table.
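As a rough sketch of that approach (the database name and seven-day window are placeholders), the NUL-device backup plus a sum over msdb's backupset history might look like:

```sql
-- Back up the log to the NUL device: the log is marked reusable,
-- but nothing is written to disk (this backup is NOT restorable).
BACKUP LOG [YourDb] TO DISK = 'NUL';

-- Later, total what those log backups would have occupied,
-- as recorded in msdb.dbo.backupset (type 'L' = log backup).
SELECT SUM(backup_size) / 1048576.0 AS total_log_backup_mb
FROM   msdb.dbo.backupset
WHERE  database_name = 'YourDb'
  AND  type = 'L'
  AND  backup_start_date >= DATEADD(DAY, -7, GETDATE());
```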

    Cheers!

  • Ed Wagner (10/26/2016)


    HanShi (10/26/2016)


    Jeff Moden (10/26/2016)


    p.s. As for ascertaining the recommended size for the log files, you could write a script to (for each DB)...

    1. Set the SIMPLE recovery model.

    2. Change the initial size and growth of each log file to 100MB each.

    3. Shrink the log file to 0 (don't worry about pregrowing it here, it'll grow)

    4. Set the FULL recovery model.

    5. Wait for a week.

    After a week passes, the log files will have grown because of the FULL recovery model. ...<snip>

This could blow up the log size (and fill up the filesystem) quickly on a busy system!! Unless you perform regular LOG backups (which are not mentioned in the above steps ;-)). The frequency of those regular LOG backups is what determines the final size the LOG file settles at.

I would skip step 4 and leave the database in SIMPLE recovery. After a week I would set the log size to the size it reached, plus an additional 20% or 30%.

    I believe Jeff's goal (other than getting to the "drink beer" step) was to ascertain how much log space is consumed during a normal period of time - one week in this case. Taking regular log backups will cause the backed up portion of the log to be available for re-use, which will negate the goal of the exercise.

I fully agree that having regularly-scheduled log backups is essential for any database in the full recovery model. During this exercise, though, they would eliminate the ability to see how much log space is actually used, and that figure is exactly what you'd use for the new initial log file size when the exercise is over.

I understand the goal Jeff is after, but letting the LOG grow for a whole week (in the FULL recovery model) is IMO overkill, with potential for disaster on disk space. When the database is set back to SIMPLE the LOG only needs to hold the longest/largest transaction, including all concurrent activity. It is very unlikely the system will have a transaction with a duration close to a week :w00t:
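For reference, the quoted steps could be scripted per database along these lines (database and logical file names are placeholders; test somewhere safe before running it for real):

```sql
-- 1. Drop to SIMPLE so the log can be fully cleared.
ALTER DATABASE [YourDb] SET RECOVERY SIMPLE;

-- 2./3. Shrink the log file to its minimum, then set
--       initial size and growth increment to 100MB each.
USE [YourDb];
DBCC SHRINKFILE (N'YourDb_log', 0);
ALTER DATABASE [YourDb]
    MODIFY FILE (NAME = N'YourDb_log', SIZE = 100MB, FILEGROWTH = 100MB);

-- 4. Back to FULL (remember: schedule log backups, or skip this
--    step and stay in SIMPLE, per the discussion above).
ALTER DATABASE [YourDb] SET RECOVERY FULL;

-- 5. Wait a week, then check how far the log grew.
```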

    ** Don't mistake the ‘stupidity of the crowd’ for the ‘wisdom of the group’! **
So if I understand things correctly, if I have these set to Simple mode, then the size of the trans log is really not all that important since we will never be restoring it. So if I shrink these down to say 50MB (or whatever) they will only grow as large as they need to between backups to do minimal logging required for crash recovery and uncommitted transactions -- that kind of thing?

Man, I gotta read the Managing Transaction Logs eBook! It's a complicated subject!

  • Siberian Khatru (10/26/2016)


    So if I understand things correctly, if I have these set to Simple mode, then the size of the trans log is really not all that important since we will never be restoring it. So If I shrink these down to say 50MB (or whatever) they will only grow as large as they need to between backups to do minimal logging required for crash recovery and uncommitted transactions -- that kind of thing?

    Close.

    The size will still matter when you need to restore, because the restore process will create a log file the same size as it was when it was backed up (the size on disk, NOT the space used in the log file.) So if, somehow, one of your Simple recovery DB log files grew to 1TB, and you needed to restore a backup of that DB somewhere, you would need 1TB of space for the log file.

Also bear in mind, unless you turn on the Auto-shrink option for your databases (please don't do that), your log file will never get smaller on its own.

    Beyond that, yes, you've got the gist of it.
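As an aside, you can see the file sizes a restore will want to allocate (including that 1TB log) without actually running the restore; the backup path here is a placeholder:

```sql
-- Lists each file in the backup with its Size in bytes;
-- the restore will create files of at least these sizes on disk.
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\YourDb.bak';
```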

  • jasona.work (10/26/2016)


    Siberian Khatru (10/26/2016)


    So if I understand things correctly, if I have these set to Simple mode, then the size of the trans log is really not all that important since we will never be restoring it. So If I shrink these down to say 50MB (or whatever) they will only grow as large as they need to between backups to do minimal logging required for crash recovery and uncommitted transactions -- that kind of thing?

    Close.

    The size will still matter when you need to restore, because the restore process will create a log file the same size as it was when it was backed up (the size on disk, NOT the space used in the log file.) So if, somehow, one of your Simple recovery DB log files grew to 1TB, and you needed to restore a backup of that DB somewhere, you would need 1TB of space for the log file.

Also bear in mind, unless you turn on the Auto-shrink option for your databases (please don't do that), your log file will never get smaller on its own.

    Beyond that, yes, you've got the gist of it.

Also, the log needs enough space to accommodate the active transactions at any given time, so even if you're auto-shrinking it can potentially grow again, and transactions will error out if you cap the log size and a single transaction requires more space.
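If you want to keep an eye on how close you're getting to that cap, both of these report log usage (the DMV needs SQL Server 2012 or later):

```sql
-- Instance-wide: log size and percent used for every database.
DBCC SQLPERF (LOGSPACE);

-- Current database only (SQL Server 2012+).
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent
FROM   sys.dm_db_log_space_usage;
```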

  • HanShi (10/26/2016)


I understand the goal Jeff is after, but letting the LOG grow for a whole week (in the FULL recovery model) is IMO overkill, with potential for disaster on disk space.

    Certainly could be and normally would be except for one thing... I read what the OP posted. The system had gone for "years" without any log file reduction. It's not likely that a week would pose a problem for this particular scenario.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • HanShi (10/26/2016)


    Jeff Moden (10/26/2016)


    p.s. As for ascertaining the recommended size for the log files, you could write a script to (for each DB)...

    1. Set the SIMPLE recovery model.

    2. Change the initial size and growth of each log file to 100MB each.

    3. Shrink the log file to 0 (don't worry about pregrowing it here, it'll grow)

    4. Set the FULL recovery model.

    5. Wait for a week.

    After a week passes, the log files will have grown because of the FULL recovery model. ...<snip>

This could blow up the log size (and fill up the filesystem) quickly on a busy system!! Unless you perform regular LOG backups (which are not mentioned in the above steps ;-)). The frequency of those regular LOG backups is what determines the final size the LOG file settles at.

I would skip step 4 and leave the database in SIMPLE recovery. After a week I would set the log size to the size it reached, plus an additional 20% or 30%.

Again, read what the OP wrote. The system had gone for years without any truncation of the log files. A week isn't going to be a bother here, especially after we've shrunk years' worth of log file down to nothing.

    --Jeff Moden



  • Jeff Moden (10/26/2016)


    HanShi (10/26/2016)


    Jeff Moden (10/26/2016)


    p.s. As for ascertaining the recommended size for the log files, you could write a script to (for each DB)...

    1. Set the SIMPLE recovery model.

    2. Change the initial size and growth of each log file to 100MB each.

    3. Shrink the log file to 0 (don't worry about pregrowing it here, it'll grow)

    4. Set the FULL recovery model.

    5. Wait for a week.

    After a week passes, the log files will have grown because of the FULL recovery model. ...<snip>

This could blow up the log size (and fill up the filesystem) quickly on a busy system!! Unless you perform regular LOG backups (which are not mentioned in the above steps ;-)). The frequency of those regular LOG backups is what determines the final size the LOG file settles at.

I would skip step 4 and leave the database in SIMPLE recovery. After a week I would set the log size to the size it reached, plus an additional 20% or 30%.

Again, read what the OP wrote. The system had gone for years without any truncation of the log files. A week isn't going to be a bother here, especially after we've shrunk years' worth of log file down to nothing.

    Gee, it sounds so bad when you say years like that.

    Wait, that IS bad, lol. I sat and watched and waited and sure enough - they eventually asked me what I thought. Better late than never I guess.

  • Jeff Moden (10/26/2016)


    HanShi (10/26/2016)


I understand the goal Jeff is after, but letting the LOG grow for a whole week (in the FULL recovery model) is IMO overkill, with potential for disaster on disk space.

    Certainly could be and normally would be except for one thing... I read what the OP posted. The system had gone for "years" without any log file reduction. It's not likely that a week would pose a problem for this particular scenario.

    Touche, Jeff.

    If the logs are truncated to zero, then think of all the space that's been reclaimed. Unless there's a very drastic change in activity, then a week's worth of log growth isn't going to touch the free space gained back. In fact, I'm curious just how big those log files are at this point.
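If anyone does want to check, the current log file sizes can be pulled from sys.master_files in one pass:

```sql
-- Current size of every log file on the instance, largest first.
SELECT DB_NAME(database_id) AS db,
       name                 AS logical_name,
       size * 8 / 1024      AS size_mb   -- size is stored in 8KB pages
FROM   sys.master_files
WHERE  type_desc = 'LOG'
ORDER BY size DESC;
```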

  • jasona.work (10/26/2016)


    Siberian Khatru (10/26/2016)


    So if I understand things correctly, if I have these set to Simple mode, then the size of the trans log is really not all that important since we will never be restoring it. So If I shrink these down to say 50MB (or whatever) they will only grow as large as they need to between backups to do minimal logging required for crash recovery and uncommitted transactions -- that kind of thing?

    Close.

    The size will still matter when you need to restore, because the restore process will create a log file the same size as it was when it was backed up (the size on disk, NOT the space used in the log file.) So if, somehow, one of your Simple recovery DB log files grew to 1TB, and you needed to restore a backup of that DB somewhere, you would need 1TB of space for the log file.

Also bear in mind, unless you turn on the Auto-shrink option for your databases (please don't do that), your log file will never get smaller on its own.

    Beyond that, yes, you've got the gist of it.

'Zactly. And it would take quite some time, because the VLFs need to be formatted into the space created, much like reformatting a disk for DOS, but slower.
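Relatedly, you can see how many VLFs that formatting has produced; growing the log in sensible fixed increments keeps this count manageable (the DMV needs SQL Server 2016 SP2 or later; DBCC LOGINFO works on older versions):

```sql
-- Older versions: one result row per VLF in the current database.
DBCC LOGINFO;

-- SQL Server 2016 SP2+: count VLFs for the current database.
SELECT COUNT(*) AS vlf_count
FROM   sys.dm_db_log_info(DB_ID());
```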

    --Jeff Moden



Thanks all for the excellent advice and pointers. I am slowly altering all these databases to be of the SIMPLE recovery model and shrinking the LOG files to 50MB on those that have grown into the multiple GB range. There is plenty of room with still about half the total space available and I'm shrinking them only to (hopefully) streamline the external backup processes. This seems a fair trade-off in space savings and I'll be monitoring the log files for any signs of unusual growth. Anything is an improvement over what I got handed.

    For the truly SharePoint savvy among us here, are there any specific SharePoint databases that require special attention or handling? Most of these are named "WSS_Content_<some name>" which seem to correspond to the various SP sites, but there are others with names like "Search_Service_<some name>" and "User_Profile_<some name>" and still others with more cryptic names so before I mess with those, I just want to make sure (if I can) that there isn't some arcane unwritten rule about certain SharePoint generated databases. It would be my guess that a database is a database is a database, but I've been wrong before -- and so I ask the experts here. Any worries, or just get on with it?

  • Siberian Khatru (10/28/2016)


Thanks all for the excellent advice and pointers. I am slowly altering all these databases to be of the SIMPLE recovery model and shrinking the LOG files to 50MB on those that have grown into the multiple GB range. There is plenty of room with still about half the total space available and I'm shrinking them only to (hopefully) streamline the external backup processes. This seems a fair trade-off in space savings and I'll be monitoring the log files for any signs of unusual growth. Anything is an improvement over what I got handed.

    For the truly SharePoint savvy among us here, are there any specific SharePoint databases that require special attention or handling? Most of these are named "WSS_Content_<some name>" which seem to correspond to the various SP sites, but there are others with names like "Search_Service_<some name>" and "User_Profile_<some name>" and still others with more cryptic names so before I mess with those, I just want to make sure (if I can) that there isn't some arcane unwritten rule about certain SharePoint generated databases. It would be my guess that a database is a database is a database, but I've been wrong before -- and so I ask the experts here. Any worries, or just get on with it?

Just to be clear, shrinking the log files won't streamline the BACKUP process. It will streamline a RESTORE process, if ever needed.

    Can't help with your SharePoint questions. We're in the process of getting rid of it. 😛

    --Jeff Moden



  • Jeff Moden (10/28/2016)


    Siberian Khatru (10/28/2016)


    Thanks all for the excellent advice and pointers. I am slowly altering all these databases to be of the SIMPLE recovery model and shrinking the LOG files to 50mb on those that have grown into the multiple GB range. There is plenty of room with still about half the total space available and I'm shrinking them only to (hopefully) streamline the external backup processes. This seems a fair trade off in space savings and I'll be monitoring the log files for any signs of unusual growth. Anything is an improvement over what I got handed.

    For the truly SharePoint savvy among us here, are there any specific SharePoint databases that require special attention or handling? Most of these are named "WSS_Content_<some name>" which seem to correspond to the various SP sites, but there are others with names like "Search_Service_<some name>" and "User_Profile_<some name>" and still others with more cryptic names so before I mess with those, I just want to make sure (if I can) that there isn't some arcane unwritten rule about certain SharePoint generated databases. It would be my guess that a database is a database is a database, but I've been wrong before -- and so I ask the experts here. Any worries, or just get on with it?

Just to be clear, shrinking the log files won't streamline the BACKUP process. It will streamline a RESTORE process, if ever needed.

    Can't help with your SharePoint questions. We're in the process of getting rid of it. 😛

    SharePoint is an overblown POS in my humble opinion. However, it is my little cross to bear here lol.

    By backup process, I meant the auxiliary AppAssure backup process that catches the directories and such as well as SQL backups it takes somehow. Still, shrinking these behemoths down to a more pedestrian size has got to be an improvement.

    Thanks for all your help Jeff, much obliged.

