Hi, I'm observing strange backup behaviour on my system. We have a large (400GB) database and one full backup job for each day of the week. Each job is set to overwrite, so while the filenames remain the same, the contents are updated. At any point in time, therefore, we have seven full backups available.
Now, we recently purchased HyperBac (due to backup space and time pressure) and initially the new compressed backups didn't result in smaller backup files. I realised that retaining a constant filename and overwriting the contents of the backup file meant that the file wouldn't shrink to the new backup size.
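For reference, each weekday job runs the equivalent of the following (database name and path are placeholders; the real jobs differ only in the filename). As far as I can tell, `WITH INIT` overwrites the backup set inside the existing file but never shrinks the physical file on disk, which is why the compressed backups didn't look smaller until I deleted the file:

```sql
-- One job like this per weekday; MyBigDB and the path are placeholders.
-- WITH INIT replaces the contents of the existing backup file,
-- but the file itself keeps its old (larger) size on disk.
BACKUP DATABASE MyBigDB
TO DISK = N'E:\Backups\MyBigDB_Monday.bak'
WITH INIT, STATS = 10;
```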
So I deleted one of the files (Monday's) and waited for the new compressed, faster backup to run. And here's the odd thing. The new backup was compressed - to about 20% of the data file size (80GB). Great. Problem was, the reduction in I/O wasn't reflected in the backup time. In fact, the backup time was longer than when the file was 400GB - twice as long in fact!
So now I have a situation where backups Thursday-Saturday fill a small part of an existing 400GB file and take an hour and a quarter, while the backups Monday-Wednesday extend an existing 80GB file slightly and take two and a half hours.
Our storage system is an EMC SAN with 30 physical drives, including 9 SSDs as a storage pool and cache, so we should be getting better performance than that anyway, but that's another question. Every time I've used HyperBac in the past it's given me smaller, faster backups, but not this time. (This is not a HyperBac-specific question though; running the backups without compression gives the same timings - 2.5h if it has to create or grow the backup file, 1.25h if it can fit the backup into an existing file.)
The processes running on the server are pretty much the same each night, so there's nothing else taking up I/O resources on some nights and not others.
I've looked at spreading the backup across multiple files, changing the number of buffers, block sizes, etc., and I may shave a couple of minutes off, but nothing close to an hour and a quarter!
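Roughly what I tried, for what it's worth (names, paths, and the specific values are placeholders; I experimented with several combinations):

```sql
-- Hypothetical tuning attempt: stripe the backup across four files
-- and set the buffer count, transfer size, and block size explicitly.
-- None of these combinations gained more than a few minutes.
BACKUP DATABASE MyBigDB
TO DISK = N'E:\Backups\MyBigDB_Monday_1.bak',
   DISK = N'E:\Backups\MyBigDB_Monday_2.bak',
   DISK = N'E:\Backups\MyBigDB_Monday_3.bak',
   DISK = N'E:\Backups\MyBigDB_Monday_4.bak'
WITH INIT,
     BUFFERCOUNT = 64,
     MAXTRANSFERSIZE = 4194304,  -- 4MB, the maximum
     BLOCKSIZE = 65536;
```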
So, it appears to me that, where there's an existing file to back up into, and that file has enough space to hold the backup without having to grow, the backup runs much faster. I've not been able to find anything on the web, official or otherwise, to prove or disprove that SQL Server and the backup subsystem behave in this way.
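In case it matters, this is how I'm comparing the timings - pulling backup history out of msdb rather than relying on job durations (database name is a placeholder):

```sql
-- Compare recent backup durations and sizes from the backup history.
SELECT database_name,
       backup_start_date,
       DATEDIFF(MINUTE, backup_start_date, backup_finish_date) AS duration_min,
       backup_size / 1073741824.0 AS size_gb,
       compressed_backup_size / 1073741824.0 AS compressed_gb
FROM msdb.dbo.backupset
WHERE database_name = N'MyBigDB'
  AND type = 'D'  -- full backups only
ORDER BY backup_start_date DESC;
```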
Anyone seen anything like this?