Third-party backup software

  • Hi Team,

Can someone suggest a few good third-party backup tools with good speed?

The requirement is a tool that works without affecting database performance, ideally by taking a snapshot of the database.

    Ashru

  • What about SQL Server native backup does not fit your requirements?

  • Hi,

To get better performance, I was wondering whether a third-party tool could do better, keeping in mind that a storage-level file snapshot may outperform a native backup.

So there are a few things you can tweak yourself... and I am under the impression that nothing is faster than a native backup with the optional settings.

Three specific optional flags in the backup command, plus striping the backup into multiple files, is the way to go, and you should see double or triple the throughput.

--These three items are my better (best?) practice for large files, along with lots of files for @Splitcount

    MAXTRANSFERSIZE = 2097152,

    BUFFERCOUNT = 64, --50

    BLOCKSIZE = 16384, --8192

    EXECUTE xp_create_subdir '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\';
    BACKUP DATABASE [PPMPro]
    TO
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_1.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_2.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_3.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_4.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_5.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_6.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_7.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_8.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_9.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_10.bak'
    WITH NOFORMAT,
    INIT,
    NAME = N'PPMPro-full Database Backup',
    SKIP,
    --These three items are the AN best practice for large files, along with lots of files for @Splitcount
    MAXTRANSFERSIZE = 2097152,
    BUFFERCOUNT = 64, --50
    BLOCKSIZE = 16384, --8192
    COMPRESSION,
    NOREWIND,
    NOUNLOAD,
    STATS = 10
    --GO
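One caveat worth noting with striping: the restore has to present every stripe, so all ten files must be kept together (lose one and the whole set is unrestorable). A restore of the set above would look roughly like this, reusing the same paths as the backup (REPLACE overwrites the existing database, so use with care):

```sql
-- Restoring a striped backup: every stripe must appear in the FROM list.
RESTORE DATABASE [PPMPro]
FROM
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_1.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_2.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_3.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_4.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_5.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_6.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_7.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_8.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_9.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_10.bak'
WITH REPLACE,  -- overwrites the existing [PPMPro] database
STATS = 10;
```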

     


    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • In addition to his other tips, as Lowell illustrated but didn't explicitly advise, make sure you are enabling compression on your backups.

Is the server virtualized? Make sure you have sufficient I/O for data reads and writes.

    Make sure your backup I/O is as isolated as possible from your regular DB I/O -- backups on separate disks/separate network adapter.

    How large are your databases & backups? Do some databases have archive data that doesn't change? Those could potentially be moved to read-only filegroups or databases that don't continue to get backed up.
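To sketch that last point (database, filegroup, and file names here are hypothetical, not from the thread): once archive data sits on a read-only filegroup, a partial backup can skip it entirely. Note that switching a filegroup to read-only needs exclusive access to the database.

```sql
-- Hypothetical example: isolate static archive data on its own filegroup,
-- then back up only the read/write filegroups on the regular schedule.
ALTER DATABASE [PPMPro] ADD FILEGROUP Archive;
ALTER DATABASE [PPMPro]
ADD FILE (NAME = PPMPro_Archive,
          FILENAME = 'E:\Data\PPMPro_Archive.ndf')
TO FILEGROUP Archive;
-- ...move the archive tables onto the Archive filegroup, then:
ALTER DATABASE [PPMPro] MODIFY FILEGROUP Archive READ_ONLY;

-- One final backup of the now read-only filegroup:
BACKUP DATABASE [PPMPro] FILEGROUP = 'Archive'
TO DISK = 'E:\Backup\PPMPro_Archive_FG.bak' WITH COMPRESSION;

-- Regular backups can now skip the unchanging data:
BACKUP DATABASE [PPMPro] READ_WRITE_FILEGROUPS
TO DISK = 'E:\Backup\PPMPro_partial.bak' WITH COMPRESSION, STATS = 10;
```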

  • Thank you Lowell and ratbak. Noted the points.

This is a standalone server, and the database is about 1.5 TB.

Storage-level backups can be (gotta be cautious about the language here) wildly superior to SQL Server native backups. Pure Storage, for example: chef's kiss. But many of the hardware solutions, and just about every 3rd party software solution, that I've worked with fail, and fail at the same point. Every single time I've been involved in evaluating anything other than native SQL Server backups, I've asked one question: show me a point-in-time restore, please?

Either they can't do it at all, or they do it extremely poorly, or it takes radically longer than our Service Level Agreement (SLA) for Recovery Time Objective (RTO). With some very clear exceptions in the hardware space, I've just found 3rd party backup software to be wildly inferior to the native stuff.

This is a hill I've chosen to die on on multiple occasions. Haven't died yet.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning


  • Currently using Veeam. It causes a short I/O freeze during backups, when the VSS writer tells SQL Server to pause for a few seconds.

It makes life easier for the sysadmins in backup lifecycle management.

A single-database point-in-time restore works reasonably well, but I haven't tried parallel restores yet. It uses a spare SQL Server to restore/replay the logs.

  • I agree, SAN-level snapshots like Veeam and Pure are awesome; they issue a freeze command, and the backup, even at say 30 TB, takes seconds. Then they issue the unfreeze command and put an entry in your msdb backup history.
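If you want to confirm those snapshot backups really are landing in history (a quick check, nothing vendor-specific), msdb flags them:

```sql
-- Snapshot backups taken through the VSS/VDI interface are flagged
-- in the standard backup history.
SELECT database_name, backup_start_date, backup_finish_date, type
FROM msdb.dbo.backupset
WHERE is_snapshot = 1
ORDER BY backup_finish_date DESC;
```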

So I took a look, and I have some measured examples from a server I took over.

1. A single-file backup of a 4.7 TB database used to take 21,000 seconds (5:50:00), or almost six hours.
2. I striped the backup into ten files, and backup time dropped to 6,993 seconds (1:56:33), or almost two hours.
3. I then got them to accept my better practices for the optional backup parameters; a year later, the db has grown by another TB to 6.3 TB, but it backs up in 5,390 seconds (1:29:50), or about 1.5 hours, consistently.
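For anyone who wants to pull the same before/after numbers from their own server, backup history has everything needed to compute duration and throughput. A sketch (the rounding and unit choices are mine):

```sql
-- Duration and rough throughput for recent full backups.
SELECT database_name,
       DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_sec,
       CAST(backup_size / 1048576.0
            / NULLIF(DATEDIFF(SECOND, backup_start_date, backup_finish_date), 0)
            AS DECIMAL(10, 1)) AS mb_per_sec
FROM msdb.dbo.backupset
WHERE type = 'D'   -- 'D' = full database backup
ORDER BY backup_finish_date DESC;
```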

     

    There is potentially another gain in there as well:

In one case, my SAN has a UNC path like \\servername\DatabaseBackups, but under the covers it's actually a cluster of multiple servers (24 in my case... big environment).

Instead of sending everything to whatever the round-robin DNS resolves the servername to at the moment, I can send each stripe to an individual node by IP address and reduce the time even further... but that's an edge case that requires a file server cluster.

The command changes a bit, to look like this:

    EXECUTE xp_create_subdir '\\10.0.100.100\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\';
    BACKUP DATABASE [Poison_pill]
    TO
    DISK = '\\10.0.100.100\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_1.bak' ,
    DISK = '\\10.0.100.101\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_2.bak' ,
    DISK = '\\10.0.100.102\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_3.bak' ,
    DISK = '\\10.0.100.100\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_4.bak' ,
    DISK = '\\10.0.100.103\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_5.bak' ,
    DISK = '\\10.0.100.104\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_6.bak' ,
    DISK = '\\10.0.100.105\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_7.bak' ,
    DISK = '\\10.0.100.106\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_8.bak' ,
    DISK = '\\10.0.100.107\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_9.bak' ,
    DISK = '\\10.0.100.108\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_10.bak'
    WITH NOFORMAT,
    INIT,
    NAME = N'Poison_pill-full Database Backup',
    SKIP,
    --These three items are the AN best practice for large files, along with lots of files for @Splitcount
    MAXTRANSFERSIZE = 2097152,
    BUFFERCOUNT = 64, --50
    BLOCKSIZE = 16384, --8192
    COMPRESSION,
    NOREWIND,
    NOUNLOAD,
    STATS = 10
    --GO

    Lowell



  • ashrukpm wrote:

    Hi Team,

    Can you someone suggest a few good third part backup tools, with good speed?

    The requirement is looking for a tool that should work without affecting db performance, in a way it should take a snapshot of the database.

    Ashru

Hi Ashru,

If you have long-term retention (LTR) in mind, then it's always better to use third-party software like Veeam or Avamar or a similar tool with good dedupe capability, where the backups can be made available at any time. If you only want to speed up the backup, then as Lowell mentioned, MAXTRANSFERSIZE, BUFFERCOUNT, and striping the backups will help. Make sure to test your backup with these options to get it right; restores also need to be considered.
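On that last point about restores, the cheapest checks are easy to script. File paths and logical file names below are hypothetical; check yours with RESTORE FILELISTONLY first.

```sql
-- Verify the backup media without restoring
-- (WITH CHECKSUM only validates if the backup was taken WITH CHECKSUM):
RESTORE VERIFYONLY FROM DISK = 'E:\Backup\PPMPro_full.bak' WITH CHECKSUM;

-- Better: an actual restore to a throwaway copy.
RESTORE DATABASE [PPMPro_RestoreTest]
FROM DISK = 'E:\Backup\PPMPro_full.bak'
WITH MOVE 'PPMPro'     TO 'E:\Data\PPMPro_RestoreTest.mdf',
     MOVE 'PPMPro_log' TO 'E:\Data\PPMPro_RestoreTest.ldf',
     STATS = 10;
```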

  • Thank you everyone, it was a nice discussion.
