Limiting bandwidth usage for backups to Azure with the new MAX_IOPS_PER_VOLUME option?

  • We hadn’t used SQL Server 2014’s backup to Azure feature in production until recently, when we were upgrading our SQL Server 2008 R2 cluster to SQL Server 2014 and found that Microsoft DPM doesn’t handle backing up databases that live on CSV volumes very well. So, as a stopgap measure until our new backup solution is in place on our new cluster, we decided to take it to the cloud!
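
    For context, the backups themselves use SQL Server 2014’s backup to URL support, roughly along the lines of the sketch below (the storage account, container, credential, and database names are made up; the credential just holds the storage account name and access key):

    -- Credential holding the Azure storage account name and access key (names are placeholders)
    CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',
         SECRET = '<storage account access key>';

    -- Compressed backup of the database straight to blob storage
    BACKUP DATABASE BigDatabase
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/BigDatabase.bak'
    WITH CREDENTIAL = 'AzureBackupCred',
         COMPRESSION,
         STATS = 5;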

    We knew some of the limitations and were willing to deal with them, but then late one night, just after manually executing the backup job, my VPN dropped and I lost connectivity to our datacenter. After some digging, I discovered that my inability to connect to the datacenter lasted exactly as long as the backup to Azure ran (it was a large database in this case). This datacenter connection does not have burstable bandwidth, and we had never needed this much bandwidth before.

    This left me with two action items: fix the burstable bandwidth issue with our datacenter, and find out whether the new IO option in SQL Server 2014’s Resource Governor would let us control how much IO is used while uploading the compressed backup file to Azure. I assume the backup file is written locally to disk and then streamed to Azure, but I could be wrong about that assumption and the backup engine might stream (and compress) it directly to Azure blob storage.
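
    The rough idea I wanted to test is something like the sketch below: a resource pool with MAX_IOPS_PER_VOLUME, a workload group in that pool, and a classifier function that routes the session running the backup job into that group. The pool/group names, the IOPS cap, and the APP_NAME() check are just placeholders, not a recommendation:

    USE master;
    GO
    -- Pool that caps physical IO per volume for anything classified into it
    CREATE RESOURCE POOL BackupPool
    WITH (MAX_IOPS_PER_VOLUME = 200);   -- placeholder cap, would need tuning
    GO
    CREATE WORKLOAD GROUP BackupGroup
    USING BackupPool;
    GO
    -- Classifier: send sessions from the backup job's application name into BackupGroup
    CREATE FUNCTION dbo.fnBackupClassifier()
    RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF APP_NAME() = N'AzureBackupJob'   -- hypothetical app name set on the job's connection
            RETURN N'BackupGroup';
        RETURN N'default';
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnBackupClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO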

    Can anyone help with this question? In the example above, we were maxing out a 30 MB pipe and would prefer to use only 10 MB for the backup to Azure. Would limiting the IOPS off the disk (MAX_IOPS_PER_VOLUME) during the upload solve the problem?
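
    If it matters, this is roughly how I planned to check whether the cap actually bites during the upload, using the per-volume Resource Governor DMV that showed up in 2014 (pool name matches the sketch above; I may have some column names slightly off):

    -- Per-volume IO statistics for each resource pool (SQL Server 2014+)
    SELECT rp.name AS pool_name,
           v.volume_name,
           v.read_io_issued_total,
           v.write_io_issued_total,
           v.read_io_throttled_total,
           v.write_io_throttled_total
    FROM sys.dm_resource_governor_resource_pool_volumes AS v
    JOIN sys.dm_resource_governor_resource_pools AS rp
         ON rp.pool_id = v.pool_id
    WHERE rp.name = N'BackupPool';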

    I understand this can be accomplished with our Layer 3 switches and firewalls, and my network admins are looking into it, but can SQL Server 2014 handle it?

    Thanks in advance.

    -Eric
