SQL backup with Block-Level technology

  • Hi,

    I am running block-level backup software on all my servers. The backup software provider recommends putting my SQL Server databases in Simple recovery mode. I am not sure whether this is a good idea. I cannot find any documentation regarding block-level backup technology and SQL Server recovery models. The backup software does not provide an agent to truncate the logs, so I would need to run a log truncate script. I tried the truncate-only command but I receive an error since I am in Full recovery mode. Can anyone give me any direction on this?
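    For reference, this is roughly what I ran (the database name is just a placeholder), and it is the command that errors out for me:

    ```sql
    -- Attempt to truncate the log without backing it up
    -- (deprecated, and removed entirely in later SQL Server versions)
    BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;
    ```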

    Thanks,

    Jeff

  • Jeff -

    By "block level backup" do you mean a specific backup package or are you utilizing the snapshot capabilities of a SAN/NAS?

    There are SAN/NAS solutions that can provide point in time recovery for a SQL Server database (up to the most recent snapshot) and depending on the vendor you may be in good shape with some caveats:

    (1) Make sure that the frequency and retention of the snapshots mirror your recovery needs; that is, if you would normally take a tlog backup of your database every X minutes, you need to be taking snapshots with the same frequency.

    (2) Make sure that you continue to take & retain snapshots at an interval that would allow you to recover from database corruption in a worst-case scenario (e.g. daily or weekly). Corruption, the creeping crud, whatever you want to call it, happens... snapshots/block-level backups will have the same corruption.

    (3) Test, test, test. Have your SAN/NAS administrators restore a point-in-time snapshot to a new LUN/filearea and attempt to attach the database to a different machine... is everything that's supposed to be there actually there?

    (4) How is your data being sent off site? Replication to another site or tape? You'd be surprised how many times I've seen SAN/NAS storage not being backed up or sent off site ("but it's a SAN! We don't need backup!"...). To point #3 above, test a recovery using just the offsite backup... that can get really interesting.
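    To point #3, once the copied files are presented to the test server, the sanity check might look something like this (database name and file paths are placeholders):

    ```sql
    -- Attach the restored copy of the data/log files on the test server
    CREATE DATABASE MyDatabase_Verify
        ON (FILENAME = 'D:\Restore\MyDatabase.mdf'),
           (FILENAME = 'D:\Restore\MyDatabase_log.ldf')
        FOR ATTACH;

    -- Then verify logical and physical consistency of every object
    DBCC CHECKDB ('MyDatabase_Verify') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    ```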

    Joe

    I would rather trust SQL Server backup/recovery than the tool you are using. If this is about critical databases, please have a good backup strategy.

    "More Green More Oxygen !! Plant a tree today"

  • Joe,

    It is a software package called Filesx. My plan is to back up every 15 minutes to disk, replicate the backup to a remote location, and also back up the backup repository to tape. My problem is the truncation of the SQL logs. Should I put my database in Simple recovery mode so I don't need to truncate the logs? I will be backing up my log files and my databases at the block level every 15 minutes. It provides a full volume image and it has forever-incremental capabilities. I have never put a critical database in Simple recovery mode, so it makes me nervous because I don't know what to expect. Will it affect performance or even recovery capabilities?
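    If I do go that route, I assume the change itself is just this (MyDatabase is a placeholder):

    ```sql
    -- Switch the database from Full to Simple recovery;
    -- the log then self-truncates at each checkpoint
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
    ```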

    Thanks,

    Jeff

  • I don't trust block level, and here is why.

    Suppose I have a large transaction and it updates blocks 1-100. In the middle, I have a failure and blocks 1-99 were transferred, but 100 wasn't. How does the recovery solution know this? The transaction would possibly still be in process and cause issues when you restarted SQL Server. It might have issues with rollback/rollforward.

    Blocks work great for smaller files, but because SQL Server depends on a consistent unit of work, which may span lots of blocks, this makes me extremely nervous.

    I wouldn't run production databases in simple mode if I needed to recover to a point in time. If you can lose 15 minutes of data, then run in simple, but I'd err on the side of caution and leave them in full mode, running T-log backups with my fulls.
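    As a sketch of what Full mode buys you: a point-in-time restore is the full backup plus the log chain, rolled forward to just before the bad moment. The names, paths, and timestamp here are made up for illustration:

    ```sql
    -- Restore the last full backup, leaving the database restoring
    RESTORE DATABASE MyDatabase
        FROM DISK = 'D:\Backups\MyDatabase_Full.bak'
        WITH NORECOVERY;

    -- Roll the log forward and stop just before the failure
    RESTORE LOG MyDatabase
        FROM DISK = 'D:\Backups\MyDatabase_Log.trn'
        WITH STOPAT = '2008-05-01 14:55:00', RECOVERY;
    ```

    In Simple mode there is no log chain, so the best you can do is the most recent full/differential backup.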

    If your recoverability requirement is point-in-time recovery, you'll need it in Full mode. Take your tlog backups as often as possible. You don't need to truncate the tlogs; each log backup truncates the inactive portion of the log for you.
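    The scheduled job is a one-liner; something like this every 15 minutes (database name and path are placeholders) keeps the chain intact and the log from growing:

    ```sql
    -- Back up the transaction log; this also truncates
    -- the inactive portion, so no separate truncate step is needed
    BACKUP LOG MyDatabase
        TO DISK = 'D:\Backups\MyDatabase_Log.trn';
    ```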

    _____________
    Donn Policarpio

  • Hi,

    Is it OK to have a fixed-size transaction log? Otherwise my transaction logs will grow about 2 GB a day.

    Thanks,

    Jeff

    It is a good practice to pre-allocate your tlog size, but you may want to leave auto-grow enabled (unless you do not have plenty of disk space); otherwise you might find your db non-operational because it can't accept further transactions when your tlog is full. If you have a usual 2 GB of transactions a day and you have a regular tlog backup, every hour, or every 30 or 15 mins, then you don't need to worry about your tlog growing. Of course, except when you run heavy transactions that bloat your tlog quickly beyond your pre-allocated size; then it will auto-grow beyond it.
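    Something like this pre-sizes the log and sets a fixed growth increment (the logical file name and sizes are examples only; size yours to cover peak activity between log backups):

    ```sql
    -- Pre-allocate the log and grow in fixed chunks
    -- rather than the default percentage growth
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_log, SIZE = 4GB, FILEGROWTH = 512MB);
    ```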

    _____________
    Donn Policarpio

    We used DoubleTake forever at my old job; it uses checksums to confirm a good write commit, and you can set it up to email or otherwise alert when the block-level replication goes out of sync. It's actually a very solid way to do DR, but it's resource-expensive; that's the real drawback of block-level replication, not reliability.

  • Please note: 5 year old thread.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
