April 28, 2025 at 3:59 pm
Hi Team,
Can someone suggest a few good third-party backup tools with good speed?
The requirement is a tool that works without affecting database performance, ideally by taking a snapshot of the database.
Ashru
April 28, 2025 at 6:11 pm
What about SQL Server native backup does not fit your requirements?
April 28, 2025 at 6:16 pm
Hi,
To get better performance, I was wondering whether a third-party tool can do better, keeping in mind that a storage-level file snapshot may be faster.
April 28, 2025 at 7:57 pm
So there are a few things you can tweak yourself, and I am under the impression that nothing is faster than a native backup with the optional settings.
Three specific optional flags in the backup command, plus striping the backup into multiple files, is the way to go, and you should see double or triple the throughput.
--These three items are my better (best?) practice for large files, along with lots of files for @Splitcount:
MAXTRANSFERSIZE = 2097152,
BUFFERCOUNT = 64, --50
BLOCKSIZE = 16384, --8192
EXECUTE xp_create_subdir '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\';
BACKUP DATABASE [PPMPro]
TO
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_1.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_2.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_3.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_4.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_5.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_6.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_7.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_8.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_9.bak' ,
DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_10.bak'
WITH NOFORMAT,
INIT,
NAME = N'PPMPro-full Database Backup',
SKIP,
--These three items are the AN best practice for large files, along with lots of files for @Splitcount
MAXTRANSFERSIZE = 2097152,
BUFFERCOUNT = 64, --50
BLOCKSIZE = 16384, --8192
COMPRESSION,
NOREWIND,
NOUNLOAD,
STATS = 10
--GO
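One thing worth noting for later: a striped backup can only be restored by presenting every stripe together. A sketch against the files written above (stripes 3 through 9 follow the same pattern; REPLACE is only needed when overwriting an existing database):

```sql
-- Restoring a striped backup: ALL stripes must be listed at once,
-- or the restore fails with an incomplete media set error.
RESTORE DATABASE [PPMPro]
FROM
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_1.bak' ,
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_2.bak' ,
    -- ...stripes 3 through 9 listed the same way...
    DISK = '\\nacifs\DataManagement_Backup\AdhocBackups\stormbase\PPMPro\PPMPro_backup_10.bak'
WITH REPLACE,
    STATS = 10;
```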
Lowell
April 28, 2025 at 8:09 pm
In addition to Lowell's other tips: his script includes it but doesn't explicitly call it out, so make sure you are enabling compression on your backups.
Is the server virtualized? Make sure you have sufficient I/O for data reading and writing.
Make sure your backup I/O is as isolated as possible from your regular DB I/O -- backups on separate disks/separate network adapter.
How large are your databases & backups? Do some databases have archive data that doesn't change? Those could potentially be moved to read-only filegroups or databases that don't continue to get backed up.
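If archive data qualifies, a rough sketch of moving it to a read-only filegroup so it only needs to be backed up once (ARCHIVE_FG and all file/path names here are hypothetical, not from the thread):

```sql
-- Hypothetical names: ARCHIVE_FG, file paths, and backup targets are illustrative.
ALTER DATABASE [PPMPro] ADD FILEGROUP ARCHIVE_FG;
ALTER DATABASE [PPMPro] ADD FILE
    (NAME = N'PPMPro_archive', FILENAME = N'E:\Data\PPMPro_archive.ndf')
    TO FILEGROUP ARCHIVE_FG;
-- Move/rebuild the archive tables onto the new filegroup (e.g. recreate the
-- clustered index ON ARCHIVE_FG WITH (DROP_EXISTING = ON)), then freeze it:
ALTER DATABASE [PPMPro] MODIFY FILEGROUP ARCHIVE_FG READ_ONLY;
-- Back up the read-only filegroup once; routine backups can then cover
-- only the read/write filegroups:
BACKUP DATABASE [PPMPro] FILEGROUP = 'ARCHIVE_FG'
    TO DISK = N'E:\Backup\PPMPro_archive_fg.bak';
BACKUP DATABASE [PPMPro] READ_WRITE_FILEGROUPS
    TO DISK = N'E:\Backup\PPMPro_rw.bak';
```

The restore sequence for piecemeal/filegroup backups is more involved, so test that side as well before relying on it.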
April 29, 2025 at 1:01 am
Thank you Lowell and ratbak. Noted the points.
This is a standalone server, and the database is about 1.5 TB.
April 29, 2025 at 12:45 pm
Storage-level backups can be (gotta be cautious about the language here) wildly superior to SQL Server native backups. Pure Storage, for example: chef's kiss. But many of the hardware solutions, and just about every third-party software solution, that I've worked with fail at one point. Every single time I've been involved in evaluating anything other than native SQL Server backups, I've asked one question: show me a point-in-time restore, please.
Either they can't do it at all, or they do it extremely poorly, or it takes radically longer than our Service Level Agreement (SLA) for Recovery Time Objective (RTO). With some very clear exceptions in the hardware space, I've just found third-party backup software to be wildly inferior to the native stuff.
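For comparison, that point-in-time test is straightforward with native backups; a sketch assuming the full recovery model, with hypothetical file names and STOPAT time:

```sql
-- Native point-in-time restore: restore the full backup WITH NORECOVERY,
-- then roll the transaction logs forward and stop at the moment you need.
-- File names and the STOPAT timestamp below are hypothetical.
RESTORE DATABASE [PPMPro]
    FROM DISK = N'E:\Backup\PPMPro_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE LOG [PPMPro]
    FROM DISK = N'E:\Backup\PPMPro_log_1.trn'
    WITH NORECOVERY;
RESTORE LOG [PPMPro]
    FROM DISK = N'E:\Backup\PPMPro_log_2.trn'
    WITH STOPAT = N'2025-04-28T15:30:00', RECOVERY;
```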
This is a hill I've chosen to die on on multiple occasions. Haven't died yet.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 29, 2025 at 3:19 pm
Currently using Veeam. There is a short I/O freeze during backups, when the VSS writer tells SQL Server to pause for a few seconds.
It makes backup lifecycle management easier for the sysadmins.
A single-database point-in-time restore works reasonably well, but I haven't tried parallel restores yet. It uses a spare SQL Server to restore and replay the logs.
April 30, 2025 at 10:41 am
I agree, SAN-level snapshots like Veeam and Pure are awesome; they issue the freeze command, and the backup, even at say 30 TB, takes seconds. Then they issue the unfreeze command and put an entry in your msdb backup history.
So I took a look, and I have some measured examples from a server I took over.
There is potentially another gain in there as well:
In one case, my SAN has a UNC path like \\servername\DatabaseBackups, but under the covers it's actually a cluster of multiple servers (24 in my case... big environment).
Instead of sending everything to whatever the round-robin DNS resolves the server name to at the moment, I can reduce the time even further by sending each stripe to an individual node by IP address... but that's an edge case that requires a file server cluster.
The command changes a bit to look like this:
EXECUTE xp_create_subdir '\\10.0.100.100\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\';
BACKUP DATABASE [Poison_pill]
TO
DISK = '\\10.0.100.100\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_1.bak' ,
DISK = '\\10.0.100.101\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_2.bak' ,
DISK = '\\10.0.100.102\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_3.bak' ,
DISK = '\\10.0.100.100\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_4.bak' ,
DISK = '\\10.0.100.103\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_5.bak' ,
DISK = '\\10.0.100.104\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_6.bak' ,
DISK = '\\10.0.100.105\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_7.bak' ,
DISK = '\\10.0.100.106\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_8.bak' ,
DISK = '\\10.0.100.107\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_9.bak' ,
DISK = '\\10.0.100.108\DataManagement_Backup\AdhocBackups\stormbase\Poison_pill\Poison_pill_backup_10.bak'
WITH NOFORMAT,
INIT,
NAME = N'Poison_pill-full Database Backup',
SKIP,
--These three items are the AN best practice for large files, along with lots of files for @Splitcount
MAXTRANSFERSIZE = 2097152,
BUFFERCOUNT = 64, --50
BLOCKSIZE = 16384, --8192
COMPRESSION,
NOREWIND,
NOUNLOAD,
STATS = 10
--GO
Lowell
May 6, 2025 at 10:03 am
Hi Team,
Can someone suggest a few good third-party backup tools with good speed?
The requirement is a tool that works without affecting database performance, ideally by taking a snapshot of the database.
Ashru
Hi Ashru,
If you have long-term retention (LTR) in mind, then it's always better to use third-party software like Veeam or Avamar or a similar tool that has good dedupe capability, so the backups can be made available at any time. If you only want to speed up the backup, then as Lowell mentioned, MAXTRANSFERSIZE, BUFFERCOUNT, and striping the backups will help. Make sure to test your backup with these options to get it right; the restore also needs to be considered.
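Along the lines of testing the backup: short of a periodic full test restore, a quick sanity check is a verify pass (the file name below is hypothetical; for a striped backup, list every stripe as a separate DISK clause):

```sql
-- Verify the backup set is complete and readable without restoring it.
-- Not a substitute for actually test-restoring on a schedule; add
-- WITH CHECKSUM here only if the backup itself was taken WITH CHECKSUM.
RESTORE VERIFYONLY
    FROM DISK = N'E:\Backup\PPMPro_full.bak';
```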
May 6, 2025 at 6:46 pm
Thank you everyone, it was a nice discussion.