Change from Full to Simple Recovery, what to do with the Log file?

  • I’m new to doing backups, and I have a situation where my server doesn’t have enough room to complete them. I didn’t set the recovery models on the databases when they were created, and I want to make sure I don’t screw anything up with the databases or the backups.

    I have a server with two large databases (15GB and 50GB respectively), and I noticed one day that my backups were failing due to lack of space on the drive. I keep the last four days’ worth of backups and found I only had room for about two days. A bit more investigation revealed that the 15GB database had a log file of 108GB, while my 50GB database has a log file of 5GB, even though activity is more constant on the 50GB database than on the 15GB one.

    A bit more digging and I found the culprit: the 15GB database was set to the Full recovery model with no log backups being taken, while the 50GB database is in the Simple recovery model. Backups are taken nightly of both databases, and it is acceptable to lose up to a day’s worth of data.

    So, I changed the recovery model to Simple on the 15GB database and made sure I had room to successfully complete my backups last night. The backups did succeed, but the log file is still 108GB for the 15GB database.
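    For reference, the change itself was just the following (the database name here is a placeholder, not the real one):

    -- Switch the staging database to the Simple recovery model
    ALTER DATABASE StagingDB SET RECOVERY SIMPLE;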

    In reading some of the articles on recovery models (thanks for the info, Gail), I know that the space in the 108GB log file is now marked as unused. I have a few questions on how to get the log file size reduced.

    1) If I leave things as they are, will SQL Server eventually reduce the size of the transaction log for the 15GB database? If SQL Server needs more room to do a backup, will it snag the unused space in the log file?

    2) If SQL Server doesn’t reclaim the space for other purposes, then am I correct in thinking that I will need to shrink the log file? If so, how far down do I shrink it? Shrink it to the minimum and then let it grow back as needed? Shrink it to 1GB and see how it goes?

    The 15GB database is a staging database. Chunks of data, from 5,000 records to a couple of million, are added, manipulated, and then transferred (usually fewer records than the initial import) to another database for processing (the 50GB one). Activity is sporadic on the 15GB database while it is constant on the 50GB one.

    -- Kit

  • Kit G (11/3/2010)


    1) If I leave things as they are, will SQL Server eventually reduce the size of the transaction log for the 15GB database? If SQL Server needs more room to do a backup, will it snag the unused space in the log file?

    No, you will have to do it.

    2) If SQL Server doesn’t reclaim the space for other purposes, then am I correct in thinking that I will need to shrink the log file? If so, how far down do I shrink it? Shrink it to the minimum and then let it grow back as needed? Shrink it to 1GB and see how it goes?

    Yes, shrink it. Personally I would shrink it to 1 GB and let it grow when needed. When it reaches the size needed to handle all concurrent transactions, it will stop growing.
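    As a rough sketch, that shrink could look like this (the database and logical log file names are placeholders; check sys.database_files for the real ones):

    -- Shrink the log file down to roughly 1 GB (the target size is given in MB)
    USE StagingDB;
    DBCC SHRINKFILE (StagingDB_log, 1024);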

    -- Gianluca Sartori

  • Kit G (11/3/2010)


    1) If I leave things as they are, will SQL Server eventually reduce the size of the transaction log for the 15GB database? If SQL Server needs more room to do a backup, will it snag the unused space in the log file?

    No.

    2) If SQL Server doesn’t reclaim the space for other purposes, then am I correct in thinking that I will need to shrink the log file? If so, how far down do I shrink it? Shrink it to the minimum and then let it grow back as needed? Shrink it to 1GB and see how it goes?

    It depends. 😀

    What I would suggest is: shrink it to the size of the data file to start (15GB), then keep an eye on the % of log used during the load periods (or during any maintenance jobs). Find the maximum space that's actually used in the log, add 10-20% for safety, and then shrink the log to that.

    Or, shrink it to 0 and then grow it in one or two operations to that identified size. That sorts out any excessive VLF problems that you probably have.
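    As a sketch of that shrink-and-regrow (the file names and target size are placeholders):

    -- Shrink the log as far as it will go
    USE StagingDB;
    DBCC SHRINKFILE (StagingDB_log, 0);

    -- Grow it back out in one operation to the size identified above, e.g. 8 GB
    ALTER DATABASE StagingDB
    MODIFY FILE (NAME = StagingDB_log, SIZE = 8GB);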

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Sounds like a good plan to me, too.

    Regarding the smaller database, does it actually need nightly backups? If the data is as volatile as I think it is, would you ever be in a position where you want to restore to the day before, or would you be better off creating an empty copy of the database and reloading files into it?

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Now that I think about it, the bigger database could probably be cut back to weekly or even monthly full backups, and nightly diff backups. Or is a separate archive database in use for the older lists?

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GilaMonster (11/3/2010)


    Kit G (11/3/2010)


    1) If I leave things as they are, will SQL Server eventually reduce the size of the transaction log for the 15GB database? If SQL Server needs more room to do a backup, will it snag the unused space in the log file?

    No.

    Okay.

    2) If SQL Server doesn’t reclaim the space for other purposes, then am I correct in thinking that I will need to shrink the log file? If so, how far down do I shrink it? Shrink it to the minimum and then let it grow back as needed? Shrink it to 1GB and see how it goes?

    It depends. 😀

    Figures. 🙂

    What I would suggest is: shrink it to the size of the data file to start (15GB), then keep an eye on the % of log used during the load periods (or during any maintenance jobs). Find the maximum space that's actually used in the log, add 10-20% for safety, and then shrink the log to that.

    Or, shrink it to 0 and then grow it in one or two operations to that identified size. That sorts out any excessive VLF problems that you probably have.

    How do I find out what percentage is getting used? Is that a DBCC command or something else? I haven't dived into the various options on DBCC.

    -- Kit

  • Gianluca Sartori (11/3/2010)


    Kit G (11/3/2010)


    1) If I leave things as they are, will SQL Server eventually reduce the size of the transaction log for the 15GB database? If SQL Server needs more room to do a backup, will it snag the unused space in the log file?

    No, you will have to do it.

    2) If SQL Server doesn’t reclaim the space for other purposes, then am I correct in thinking that I will need to shrink the log file? If so, how far down do I shrink it? Shrink it to the minimum and then let it grow back as needed? Shrink it to 1GB and see how it goes?

    Yes, shrink it. Personally I would shrink it to 1 GB and let it grow when needed. When it reaches the size needed to handle all concurrent transactions, it will stop growing.

    Thanks for the info. 🙂

    -- Kit

  • Kit G (11/3/2010)


    How do I find out what percentage is getting used? Is that a DBCC command or something else? I haven't dived into the various options on DBCC.

    Yup, a DBCC. It is a (semi) documented one.

    DBCC SQLPERF(Logspace)

    DBCC SQLPERF has a LOT of possible uses; I think LOGSPACE is the only documented one, though.
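    For example (the history table is just an illustration for tracking usage over a load period, not something you need):

    -- One row per database: Database Name, Log Size (MB), Log Space Used (%), Status
    DBCC SQLPERF(LOGSPACE);

    -- To watch the log over time, the output can be captured into a table
    CREATE TABLE dbo.LogSpaceHistory
    (
        CaptureTime     datetime NOT NULL DEFAULT (GETDATE()),
        DatabaseName    sysname  NOT NULL,
        LogSizeMB       float    NOT NULL,
        LogSpaceUsedPct float    NOT NULL,
        Status          int      NOT NULL
    );

    INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
    EXEC ('DBCC SQLPERF(LOGSPACE)');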

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GSquared (11/3/2010)


    Sounds like a good plan to me, too.

    Regarding the smaller database, does it actually need nightly backups? If the data is as volatile as I think it is, would you ever be in a position where you want to restore to the day before, or would you be better off creating an empty copy of the database and reloading files into it?

    The data is volatile, but only over the course of about 2 months. Nightly backups are probably a good idea at the moment since I'm setting up 1.5 to 3.5 million records for various data pulls that will take place over the next few weeks. The setup can take a day or three so having to go back to last week's backup would be too big an impact on using the data. Come the middle of December or so, the database will be idle until next year around September, October.

    I guess it would be possible to set up a backup job that checks the month: during the idle period it would only do a weekly backup, and while the database is being used it would do a daily backup.

    -- Kit

  • GSquared (11/3/2010)


    Now that I think about it, the bigger database could probably be cut back to weekly or even monthly full backups, and nightly diff backups. Or is a separate archive database in use for the older lists?

    The bigger database gets data added to it each day. Would that make doing a restore on the database at the end of the month a pain? I haven't tried to do a restore with nightly diff backups, just doing a restore from a backup done in simple recovery mode.

    -- Kit

  • Kit G (11/3/2010)


    GSquared (11/3/2010)


    Now that I think about it, the bigger database could probably be cut back to weekly or even monthly full backups, and nightly diff backups. Or is a separate archive database in use for the older lists?

    The bigger database gets data added to it each day. Would that make doing a restore on the database at the end of the month a pain? I haven't tried to do a restore with nightly diff backups, just doing a restore from a backup done in simple recovery mode.

    Probably best to just do a nightly backup and not worry about it anymore.

    It's better to have a backup you don't need than to need one you don't have.

  • Kit G (11/3/2010)


    GSquared (11/3/2010)


    Now that I think about it, the bigger database could probably be cut back to weekly or even monthly full backups, and nightly diff backups. Or is a separate archive database in use for the older lists?

    The bigger database gets data added to it each day. Would that make doing a restore on the database at the end of the month a pain? I haven't tried to do a restore with nightly diff backups, just doing a restore from a backup done in simple recovery mode.

    If there's a lot of data in it that doesn't change from day to day, diff backups will probably be smaller and faster than full backups. Restoring with them is pretty simple.

    The frequency of full backups on a semi-static, large database would depend on how long it takes before the diff backup is more of a pain than a full backup would be. Diff backups contain all the data pages that have changed since the last full backup. So, if 3/4 of the database hasn't changed since yesterday and 1/4 has, and you did a full backup yesterday, the diff backup will only grab the 1/4 that did change.

    You can use the Restore wizard to either do a restore using a full and diff, or to generate a script for it.

    So, if you have a database where approximately 10% of its pages change in a given week, then a full backup plus six diff backups is likely to take less disk space than even two full backups, and it gives you the ability to restore to points in the last week instead of just the last two days. You'll need to work out roughly how much changes per day versus how much sits untouched for a week or more, to determine if this is worth testing out.
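    A rough sketch of what the two commands look like (the database name and paths are placeholders):

    -- Weekly (or monthly) full backup
    BACKUP DATABASE BigDB
        TO DISK = N'D:\Backups\BigDB_full.bak'
        WITH INIT;

    -- Nightly differential: only the pages changed since the last full backup
    BACKUP DATABASE BigDB
        TO DISK = N'D:\Backups\BigDB_diff.bak'
        WITH DIFFERENTIAL, INIT;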

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • So, I do a full backup at 1:00pm on a Sunday. Each night, at 11:00pm, I have the differential backup scheduled to run.

    On Tuesday, at 3:00pm, my database goes belly-up and I have to restore. So, the restore process would start with the full backup done on Sunday at 1:00pm, then add on the backups done Sunday night and Monday night, leaving me to deal with whatever data changes were made between 11:00 Monday night and when the database went belly-up.

    But, instead of having to store three days of full backups, I would only be storing a full backup plus whatever changes were made each day? This would definitely make things a bit easier space-wise as the database isn't in use over the weekend.

    I'll have to check this out and see if it will help with the space issues.

    By the way, I got the log file reduced in size. DBCC SQLPERF showed that 0.07% of the log file was in use (out of 105GB). It is now shrunk down to 15MB with 10% in use, and I'll keep an eye on it to see how the usage goes.

    Thanks for your guidance, all. I'm happy not to have made a mess of my database, since I didn't know what I was doing. 🙂

    -- Kit

  • Michael Valentine Jones (11/3/2010)


    It's better to have a backup you don't need than to need one you don't have.

    I definitely agree on this one. If I do set up another backup method, I'm going to test it first before I retire the other method. No need to have Gail answer "You don't" in response to a corrupt database with no backups. 😀

    -- Kit

  • Differentials are cumulative, so you only restore the most recent diff. T-Logs are additive, so you restore them all, or up to the desired point in time, since the most recent diff.
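    As a sketch (placeholder names and paths), the restore is just the full backup followed by the latest diff:

    -- Restore the last full backup, leaving the database ready for more restores
    RESTORE DATABASE BigDB
        FROM DISK = N'D:\Backups\BigDB_full.bak'
        WITH NORECOVERY;

    -- Then only the most recent differential, and bring the database online
    RESTORE DATABASE BigDB
        FROM DISK = N'D:\Backups\BigDB_diff.bak'
        WITH RECOVERY;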
