Big transaction log file

  • Hi,

I found a SQL Server database with a transaction log file of 65 GB.

    The database is configured with the recovery model set to FULL.

    Also, I noticed that since the database was created, only full database backups have been taken.

    No transaction log backups were ever executed.

    Now, the 65 GB transaction log file uses more than 70% of the disk space.

    Which scenario do you recommend?

1- Back up the database, back up the transaction log to a new disk, shrink the transaction log file, and schedule transaction log backups every hour.

    2- Back up the database, set the recovery model option to SIMPLE, shrink the transaction log file, and back up the database again.

    Would the 65 GB file-shrink operation have an impact on my database users?

    regards

    Cygne

• The SQL Server version is 2008 SP2 (10.0.4000)

• You should set up transaction log backups to run "frequently". How frequently depends on the type of system you're running and the level of point-in-time recovery you need. For example, I back up our TLOGs every 5 minutes, while others may get away with hourly in an OLAP-type environment.

    Hard to say whether running the TLOG backup will impact users without knowing whether the LDF file(s) are on the same set of disks as the other database files. My guess is that it shouldn't impact other users.

    Once you run your first TLOG backup, do a quick check to determine your largest clustered index size, and resize your log file to 1.5 times that (this allows for future log growth and for maintenance tasks like rebuilding/reorganizing indexes) without giving you too much heartache.
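
    If it helps, here's a hedged T-SQL sketch of that sizing check. The database and log file names (MyDatabase, MyDatabase_log) and the 15000 MB target are placeholder assumptions, and "largest clustered index" is approximated by reserved pages per heap/clustered index:

    -- Largest heap/clustered index by reserved size, in MB (run in the target database).
    SELECT TOP (1)
           OBJECT_NAME(p.object_id)       AS table_name,
           SUM(au.total_pages) * 8 / 1024 AS size_mb
    FROM sys.partitions AS p
    JOIN sys.allocation_units AS au
      ON au.container_id = p.partition_id
    WHERE p.index_id IN (0, 1)            -- 0 = heap, 1 = clustered index
    GROUP BY p.object_id
    ORDER BY size_mb DESC;

    -- Resize the log to roughly 1.5x that figure (assumed names and size).
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 15000MB);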

    Option 1 should be ok...

1- Back up the database, back up the transaction log to a new disk, shrink the transaction log file down to an acceptable size, and schedule transaction log backups every hour.

______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience

  • cygne17_2 77208 (2/18/2014)


Hi,

    I found a SQL Server database with a transaction log file of 65 GB.

    The database is configured with the recovery model set to FULL.

    Also, I noticed that since the database was created, only full database backups have been taken.

    No transaction log backups were ever executed.

    Now, the 65 GB transaction log file uses more than 70% of the disk space.

    Which scenario do you recommend?

    1- Back up the database, back up the transaction log to a new disk, shrink the transaction log file, and schedule transaction log backups every hour.

    2- Back up the database, set the recovery model option to SIMPLE, shrink the transaction log file, and back up the database again.

    Neither.

    1. Do a full backup of the database.

    2. Set the recovery model to SIMPLE.

    3. Shrink the transaction log file to ZERO.

4. Regrow the transaction log file to the expected max size you'll need, to ensure you don't have a bazillion VLFs.

5. Make sure the auto-growth is set to something reasonable like 1,000 MB (1 GB), or whatever you are comfortable with in terms of VLFs. Whatever you do, don't use a percentage to control growth.

    6. Set the database back to full recovery.

7. Take a DIFF backup to re-establish the log chain.

8. Set up point-in-time (PIT) log file backups, with the idea that the time between log backups should be less than the maximum number of minutes of data you can afford to lose.

    9. Verify that the PIT backups are working correctly.

    10. Drink Beer.
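
    For anyone who wants the steps above spelled out, here's a hedged T-SQL sketch of steps 1-8; MyDatabase, the logical log file name, the sizes, and the backup paths are all assumptions to adjust for your environment (steps 9 and 10 are left to you):

    -- 1. Full backup.
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_full.bak';

    -- 2. Switch to SIMPLE so the log can be truncated.
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

    -- 3. Shrink the log file as far as it will go.
    USE MyDatabase;
    DBCC SHRINKFILE (MyDatabase_log, 0);

    -- 4 and 5. Regrow to the expected size in one step (keeps the VLF count low)
    --          and set a fixed growth increment rather than a percentage.
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 10000MB, FILEGROWTH = 1000MB);

    -- 6. Back to full recovery.
    ALTER DATABASE MyDatabase SET RECOVERY FULL;

    -- 7. Differential backup to re-establish the log chain.
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_diff.bak' WITH DIFFERENTIAL;

    -- 8. Log backups, scheduled (e.g. via a SQL Agent job) at your PIT interval.
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';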

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • The real answer is "It Depends"

How important is the data to your company/client? Do they have an RTO/RPO specified? In other words, what is the maximum amount of data loss they can afford (15 minutes? 20 days?)?

    Not every database needs to be in full recovery, though most probably should be. If you need the database in full recovery, then follow the steps Jeff outlined. If simple is sufficient based on the importance of the data and the RPO objectives, then put the database in simple and take your regular full backups.

    There are plenty of cases where a daily full with 4x daily diff backups is sufficient for the data being stored and the business objectives. It's up to the DBA to understand those objectives and educate the decision makers so they can make the best decision with regard to the data and policy.
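
    As a hedged illustration of that "daily full with 4x daily diff" pattern (names and paths are assumptions; in practice you'd schedule these as SQL Agent jobs):

    -- Once daily, e.g. at 00:00:
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_full.bak' WITH INIT;

    -- Four times daily, e.g. 06:00 / 12:00 / 18:00 / 23:00:
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_diff.bak' WITH DIFFERENTIAL;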

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • What we can ALL AGREE upon is whichever route you take, please DO NOT SKIP STEP #10

______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience

  • SQLRNNR (2/18/2014)


    The real answer is "It Depends"

How important is the data to your company/client? Do they have an RTO/RPO specified? In other words, what is the maximum amount of data loss they can afford (15 minutes? 20 days?)?

    Not every database needs to be in full recovery, though most probably should be. If you need the database in full recovery, then follow the steps Jeff outlined. If simple is sufficient based on the importance of the data and the RPO objectives, then put the database in simple and take your regular full backups.

    There are plenty of cases where a daily full with 4x daily diff backups is sufficient for the data being stored and the business objectives. It's up to the DBA to understand those objectives and educate the decision makers so they can make the best decision with regard to the data and policy.

Hmmmm.... since DIFFs are cumulative since the last full backup, I wonder whether 4x DIFFs would actually take more space than non-cumulative log file backups. I'll have to give that a try.
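
    One way to give it a try without restoring anything is to compare the sizes recorded in the msdb backup history. A sketch, assuming a database called MyDatabase with both diff and log backups running over the window compared:

    -- Total backup size per type over the last 7 days.
    -- type 'D' = full, 'I' = differential, 'L' = log.
    SELECT bs.type,
           COUNT(*)                        AS backup_count,
           SUM(bs.backup_size) / 1048576.0 AS total_mb
    FROM msdb.dbo.backupset AS bs
    WHERE bs.database_name = N'MyDatabase'
      AND bs.backup_start_date >= DATEADD(DAY, -7, GETDATE())
    GROUP BY bs.type;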

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (2/18/2014)


    SQLRNNR (2/18/2014)


    The real answer is "It Depends"

How important is the data to your company/client? Do they have an RTO/RPO specified? In other words, what is the maximum amount of data loss they can afford (15 minutes? 20 days?)?

    Not every database needs to be in full recovery, though most probably should be. If you need the database in full recovery, then follow the steps Jeff outlined. If simple is sufficient based on the importance of the data and the RPO objectives, then put the database in simple and take your regular full backups.

    There are plenty of cases where a daily full with 4x daily diff backups is sufficient for the data being stored and the business objectives. It's up to the DBA to understand those objectives and educate the decision makers so they can make the best decision with regard to the data and policy.

Hmmmm.... since DIFFs are cumulative since the last full backup, I wonder whether 4x DIFFs would actually take more space than non-cumulative log file backups. I'll have to give that a try.

The 4x diffs will often take more space, but sometimes they won't. I have seen it go both ways. 😀

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • log file" use more than 70% of the disk space.

    Which scenario do you recommend?

    1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.

    2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.

    Does the " 65 GB file shrink" operation would have impact on my database users ?

    regards

    Cygne

    [/quote]

I would agree with SQLRNNR: the real answer is, it depends on whether you need point-in-time recovery or not.

    So, check with your app team and confirm the correct RPO/RTO... then, based on that, create your backup policy.

    If no point-in-time recovery is needed, and if the DB is small, then do a daily full backup and also change the recovery model to simple.

    Regards,
    SQLisAwe5oMe.

  • MyDoggieJessie (2/18/2014)


    What we can ALL AGREE upon is whichever route you take, please DO NOT SKIP STEP #10

    Heh... I'll drink to that! 🙂

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • cygne17_2 77208 (2/18/2014)


    Which scenario do you recommend?

    1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.

    2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.

    It depends.

First, identify the data loss allowance for that DB (the RPO). That will determine whether the DB needs full recovery and log backups, or simple recovery and just full backups.
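
    A quick way to see where each database stands before that conversation — a hedged sketch that just reads the catalog and the msdb backup history:

    -- Recovery model and most recent log backup per database.
    SELECT d.name,
           d.recovery_model_desc,
           MAX(bs.backup_finish_date) AS last_log_backup
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS bs
      ON bs.database_name = d.name
     AND bs.type = 'L'
    GROUP BY d.name, d.recovery_model_desc;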

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
• I agree with Jeff, but would put Step 10 as Step 1; that way, when things go wrong, it won't seem so bad! 😀

    Regards

    Lempster

  • Hi all,

Thanks for all your answers.

    Very helpful!

    regards

    Cygne

  • cygne17_2 77208 (2/19/2014)


    Hi all,

Thanks for all your answers.

    Very helpful!

    regards

    Cygne

    You're welcome

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Jeff Moden (2/18/2014)


    cygne17_2 77208 (2/18/2014)


Hi,

    I found a SQL Server database with a transaction log file of 65 GB.

    The database is configured with the recovery model set to FULL.

    Also, I noticed that since the database was created, only full database backups have been taken.

    No transaction log backups were ever executed.

    Now, the 65 GB transaction log file uses more than 70% of the disk space.

    Which scenario do you recommend?

    1- Back up the database, back up the transaction log to a new disk, shrink the transaction log file, and schedule transaction log backups every hour.

    2- Back up the database, set the recovery model option to SIMPLE, shrink the transaction log file, and back up the database again.

    Neither.

    1. Do a full backup of the database.

    2. Set the recovery model to SIMPLE.

    3. Shrink the transaction log file to ZERO.

4. Regrow the transaction log file to the expected max size you'll need, to ensure you don't have a bazillion VLFs.

    5. Make sure the auto-growth is set to something reasonable like 1,000 MB (1 GB), or whatever you are comfortable with in terms of VLFs. Whatever you do, don't use a percentage to control growth.

    6. Set the database back to full recovery.

    7. Take a DIFF backup to re-establish the log chain.

    8. Set up point-in-time (PIT) log file backups, with the idea that the time between log backups should be less than the maximum number of minutes of data you can afford to lose.

    9. Verify that the PIT backups are working correctly.

    10. Drink Beer.

I'm all in favor of Step 10, even if you haven't completed all of steps 1-9 😀

    Kurt

    Kurt W. Zimmerman
    SR DBA
    Lefrak Organization
    New York, NY

    http://www.linkedin.com/in/kurtwzimmerman
