Backups

  • There is a main process that inserts data into a database every 20 minutes. In this case, what is the ideal transaction log backup strategy? Should it be every 30 minutes or every 15 minutes? We are also going with a nightly full backup.

    Thanks.

  • The answer to that is it depends. How much other transaction volume do you experience?

    Personally without more info, I would lean towards every 15 minutes.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • CirquedeSQLeil (3/17/2010)


    The answer to that is it depends. How much other transaction volume do you experience?

    Personally without more info, I would lean towards every 15 minutes.

    Agreed. Way too little information provided to give a good answer.

    How big is the database? What are requirements for max allowable data loss? How much storage space do you have available for backup storage?

    Once you get all your requirements hashed out, your question will probably answer itself.

    The Redneck DBA

  • Jason Shadonix (3/17/2010)


    Once you get all your requirements hashed out, your question will probably answer itself.

    Agreed

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • CirquedeSQLeil (3/17/2010)


    The answer to that is it depends. How much other transaction volume do you experience?

    Personally without more info, I would lean towards every 15 minutes.

    I am also tempted to go with a t-log backup every 15 minutes; the allowed data loss is 30 minutes.

    The database is not very big: about 20 GB, with 50-100 MB of data input per day.

    Does the rule of thumb say that the backup interval should be less than the allowed data-loss window? If so, then I guess the answer to my OP really is 'it depends', as anything from 10 to 30 minutes for a t-log backup would serve the purpose.

    Thanks..

  • If your allowed data loss is 30 minutes, then either interval would be fine.

    With 15 minutes, less data loss = better. The backup time should be negligible and the amount of data being inserted is not much either.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • CirquedeSQLeil (3/17/2010)


    If your allowed data loss is 30 minutes, then either interval would be fine.

    With 15 minutes, less data loss = better. The backup time should be negligible and the amount of data being inserted is not much either.

    Yeah, the backup time is negligible, but the amount of data being input varies from time to time. I will go with a 15-minute backup strategy.
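
    For reference, a 15-minute cadence like this is typically implemented as a SQL Server Agent job that runs a plain BACKUP LOG statement on a schedule. A minimal sketch — the database name and path here are hypothetical placeholders, not from this thread:

    ```sql
    -- Minimal sketch of the statement a 15-minute Agent job would run.
    -- 'SalesDB' and the backup path are placeholder names.
    BACKUP LOG SalesDB
    TO DISK = N'D:\Backups\SalesDB_log.trn'
    WITH STATS = 10;
    ```

    With a 30-minute allowed data loss, a 15-minute schedule means a restore is at most two log backups behind the failure point.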

  • I have a couple of questions now:

    1. How much overhead would it add if I include a 'verify backup when finished' step? Is there a way to test it?

    2. With a full backup daily, would I just append the t-logs to the existing media set or overwrite all existing media sets?

  • T_SqueaKquel (3/17/2010)


    1. How much overhead would it add if I include a 'verify backup when finished' step? Is there a way to test it?

    Put it this way. Would you rather find out that a backup is damaged and not usable at the time you take the backup, or when you need to restore using that backup?

    2. With a full backup daily, would I just append the t-logs to the existing media set or overwrite all existing media sets?

    Neither. Each log backup to its own file.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • The verify typically doesn't add much time. Not sure exactly how much, but you could easily test this on your system.

    In terms of logs, I'm with Gail. Each log backup to its own, new file.
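
    The "each log backup to its own, new file" advice is usually implemented by building a timestamped file name at backup time. A sketch, with a hypothetical database name and path:

    ```sql
    -- Sketch: write each log backup to its own new, timestamp-named file.
    -- 'SalesDB' and the path are placeholders.
    DECLARE @stamp nvarchar(20) =
        REPLACE(REPLACE(REPLACE(
            CONVERT(nvarchar(19), GETDATE(), 120),  -- 'yyyy-mm-dd hh:mi:ss'
            '-', ''), ':', ''), ' ', '_');          -- -> 'yyyymmdd_hhmiss'

    DECLARE @file nvarchar(260) =
        N'D:\Backups\SalesDB_log_' + @stamp + N'.trn';

    BACKUP LOG SalesDB
    TO DISK = @file
    WITH INIT;  -- INIT is safe here because each file name is unique per run
    ```

    Unique per-backup files also make point-in-time restores simpler, since each file maps to one step in the log chain.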

  • GilaMonster (3/17/2010)


    T_SqueaKquel (3/17/2010)


    1. How much overhead would it add if I include a verify backup when finished, is there a way to test it out.

    Put it this way. Would you rather find out that a backup is damaged and not usable at the time you take the backup, or when you need to restore using that backup?

    I would want it to be checked while it's being backed up. That's why I plan to run CheckDB before the full backup runs, so that if I ever need to restore I can go back to that backup and be sure it is valid and corruption-free. So now I will include this step for the t-logs too.

    Thanks.

  • No, you don't want to run a checkDB every time you back the log up. That's just insane. If you're backing up every 15 min, CheckDB can easily take longer than the interval between the backups.

    Besides, in full recovery, corruption in the data file can't get into the log backups.

    A database can be fine and the backup taken of it unusable. It happens; it means the backup itself was damaged or not written properly. That's why you should back up with the CHECKSUM option and run RESTORE VERIFYONLY after the backup to verify that nothing went wrong during the backup.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
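
    The two recommendations above translate directly into the CHECKSUM backup option plus a RESTORE VERIFYONLY immediately afterwards. A sketch — database name and path are placeholders:

    ```sql
    -- Sketch: back up with page checksums, then verify the backup file itself.
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_full.bak'
    WITH CHECKSUM, INIT, STATS = 10;

    -- Re-reads the backup file and rechecks its checksums without restoring.
    RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\SalesDB_full.bak'
    WITH CHECKSUM;
    ```

    The same pair — CHECKSUM on the backup, VERIFYONLY afterwards — applies to log backups as well.
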
  • GilaMonster (3/17/2010)


    No, you don't want to run a checkDB every time you back the log up. That's just insane. If you're backing up every 15 min, CheckDB can easily take longer than the interval between the backups.

    Besides, in full recovery, corruption in the data file can't get into the log backups.

    Oh yeah, I was only gonna run the checkDB before the nightly full backup.

    A database can be fine and the backup taken of it unusable. It happens, it means the backup itself was damaged or not written properly. That's why you should backup with the checksum option and do restore verifyonly after doing the backup to verify that nothing went wrong during the backup.

    This means for my t-logs, I do (i) a verify when finished, and

    (ii) a checksum before writing to media

    Thanks again.....
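
    Running CheckDB ahead of the nightly full backup, as described above, is just a scheduled step before the BACKUP DATABASE statement. A sketch with a placeholder database name:

    ```sql
    -- Sketch: integrity check before the nightly full backup.
    -- If this raises errors, the job should stop before overwriting any
    -- older, known-good backups.
    DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;
    ```

    As noted later in the thread, this belongs only in the nightly job — never in the 15-minute log backup job, where it could easily outrun the backup interval.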

  • T_SqueaKquel (3/17/2010)


    (ii) Checksum before writing to media

    No. Backup with the CHECKSUM option.

    And both apply to full and log backups, not just log backups.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Thanks for the suggestions, Gail.
