I'm testing out some parameters for the BACKUP and RESTORE commands in order to speed up a monthly database refresh process. At the moment, the database is approximately 1 TB, and the backup and restore each take approximately 4 hours.
In testing various combinations of MAXTRANSFERSIZE, BUFFERCOUNT, and BLOCKSIZE against a 25 GB database, I am seeing a 40-50% improvement in backup times and about a 15-20% improvement in restores. Once I go above roughly BUFFERCOUNT = 500, I start getting out-of-memory errors.
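For reference, this is the general shape of the commands I'm testing. The database name, file paths, and parameter values below are just placeholders for illustration, not the actual combinations from my results:

```sql
-- Backup with explicit tuning parameters (placeholder values, not my final numbers).
BACKUP DATABASE [MyDatabase]
TO DISK = N'X:\Backups\MyDatabase.bak'
WITH
    MAXTRANSFERSIZE = 4194304,  -- 4 MB per transfer
    BUFFERCOUNT     = 400,      -- staying under the ~500 level where I hit OOM
    BLOCKSIZE       = 65536;    -- 64 KB

-- Restore with the same knobs (again, placeholder values).
RESTORE DATABASE [MyDatabase]
FROM DISK = N'X:\Backups\MyDatabase.bak'
WITH
    REPLACE,                    -- overwriting the existing copy during the refresh
    MAXTRANSFERSIZE = 4194304,
    BUFFERCOUNT     = 400,
    BLOCKSIZE       = 65536;
```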
Since I am testing with a 25 GB database (I don't have the time, space, or a free server to test a 1 TB backup/restore repeatedly), does the database size affect the point at which you start seeing out-of-memory errors?
When hardcoding these parameters, how do you ensure that you never get out-of-memory errors? Or is that always a risk?
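For context on why I'm asking: my rough understanding (happy to be corrected) is that the transfer buffers consume about BUFFERCOUNT * MAXTRANSFERSIZE of memory. Assuming a hypothetical 4 MB MAXTRANSFERSIZE, the BUFFERCOUNT = 500 level where I start seeing errors would work out to roughly 2 GB:

```sql
-- Rough estimate of transfer-buffer memory: BUFFERCOUNT * MAXTRANSFERSIZE
-- (my assumption, not something I've confirmed). With a hypothetical 4 MB
-- MAXTRANSFERSIZE and the BUFFERCOUNT = 500 level where OOM starts:
SELECT (500 * CAST(4194304 AS bigint)) / 1048576 AS approx_buffer_memory_mb;  -- ~2,000 MB
```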
Basically, my plan was to find the breaking point (OOM) with these parameters, then back down a level or two from that and run a single test against the 1 TB database during a normal backup and restore.
Attached are my test results if you are interested.
Thanks for any input!