In that case, my approach would be completely different. If you are changing the back-end disks, I would take a very different approach than what you are doing. It might be more work (VERY likely to be more work), but it is much safer (my opinion).
What I'd do: make a secondary backup server VM now, and give that VM its disk from the NEW SAN (presuming that by "new storage" you mean a new SAN). At this point you have 2 backup servers and 2 locations where you can store your backups. The new server has no backup data yet, but the old one has all of your backups. Next, find your downtime window and copy the data over (unless your storage admins have already moved the data, in which case skip this step). Then set up rsync on both servers to copy new files over at some point after the backups have completed. This way the backups on the two servers stay in sync AND you have a secondary backup server.
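As a rough sketch, the rsync step could be a cron entry on the primary backup server. The hostname `backup02`, the path `/sqlbackups/`, and the 03:00 schedule are all placeholders I've made up for illustration; adjust them to your environment and make sure the job runs after your backup window closes:

```shell
# Hypothetical crontab entry on the primary backup server.
# Assumes backups finish before 03:00; pushes new/changed backup
# files to the secondary ("backup02") each night.
0 3 * * * rsync -az --partial /sqlbackups/ backup02:/sqlbackups/
```

`-a` preserves permissions and timestamps, `-z` compresses over the wire, and `--partial` lets an interrupted transfer of a large backup file resume instead of restarting from scratch.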
Now, go into each and every SQL backup job that you have; my next bit of advice changes depending on how you do backups. If you have your own in-house scripts, hopefully they have parameters for the backup location and backup file names. This is (in my opinion) the easiest way to handle it: run the backup against the primary backup server, and if that stored procedure call fails (i.e. the backup fails), assume the VM is offline and fire the backup at the secondary backup server instead. If you have maintenance plans in place rather than an in-house backup script, set the step that runs on backup failure to do a backup on the secondary server. If you use Ola Hallengren's scripts, I don't know much about those, but I imagine the process is similar to the "in-house scripts" one. If you have some other method for backups, then follow its steps. Either way, get at LEAST 1 secondary server for your backups and set up some form of replication of the backup data (that is what rsync will do for you).
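The "try the primary, fall back to the secondary" logic above could be sketched as a small wrapper like this. This is a hypothetical sketch, not a drop-in script: in practice the two commands you pass in would be whatever actually runs your backup (for example, a `sqlcmd` call to your in-house backup procedure with its location parameter pointed at each server's share).

```shell
#!/bin/sh
# Hypothetical primary/secondary backup failover sketch.
# $1 = command that runs the backup against the primary server
# $2 = command that runs the backup against the secondary server
run_backup() {
    if $1; then
        # Primary backup succeeded; nothing else to do.
        echo "backup OK on primary"
    elif $2; then
        # Primary failed, so assume that VM is offline and
        # fire the backup at the secondary instead.
        echo "backup OK on secondary (primary assumed offline)"
    else
        # Both attempts failed; this needs a human.
        echo "backup FAILED on both servers" >&2
        return 1
    fi
}
```

You would call it as `run_backup '<primary backup command>' '<secondary backup command>'`; the same idea works as two job steps inside a SQL Agent job, with the second step set to run only on failure of the first.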
This might sound like overkill, but if you ever hit a point where your primary backup server VM crashes during a disaster recovery moment, you can just flip over to the secondary and let the server admin team deal with the down server while you continue restoring your databases! Plus, you can do maintenance on the primary backup server knowing that the secondary will pick up the workload and your backups will keep running happily.
The above assumes that the OLD storage is still available for use.
If you absolutely cannot leave the backup server on the old storage for some reason (old storage being retired), I would request enough disk to do the above with a minimum of 2 VMs, even if they are using the same physical disk. That way, when you need to do maintenance on a single backup server, you can fail the disk over and the backup jobs will continue to run happily.
My preferred backup hardware strategy is a minimum of 2 servers (physical or VM) with isolated disks (so one disk corrupting doesn't mean backups are toast) and at least 1 offsite backup. My setup isn't perfect: we have 3 physical boxes in the same blade array (a single point of failure if the blade array fails, though if it fails and the SAN is still good we still have our backups and just need to replace the blades), the disk is a shared floating SAN disk (floating in that it is hosted on server A, unless server A is offline, then it is hosted on B, unless B is down, then C), and we back that up to tape which is shipped offsite nightly. So our "worst case scenario" is 1 day of data loss in the event the server room explodes 1 second before we grab the tapes to ship offsite. Since the backups are pushed to tape hourly, if the SAN dies, the tape backups are good to get our data back.
What I would recommend is working with your server team to determine what your RPO and RTO are. Right now, with a single server handling your SQL backups, if that server dies and needs to be recovered, how much time are you allowed to be down? Plan for an absolute worst case: your server room and all equipment in it is destroyed. Not likely to happen, but it could. Determine how long each of the small parts takes to get back up (your RTO) and how much data will be lost (your RPO). The worst case scenario is not likely to happen, but all parties impacted should be aware of how long it'll take to fix. AND if it is all documented, you have your butt covered if the problem ever does happen. And it covers all the small bits in between.

For example, if the SAN were to have a power surge that fried the controller and caused all the disks to blow their boards, your SQL stuff is down (as is everything else). The big boss comes to your desk and asks "how long until SQL is back online?", and you can grab your document and say "approximately 2 and a half weeks. We need to get the SAN back online, and since this is a hardware failure, the SAN will need to be replaced. The downtime for that is 2 weeks to order a new one in. Once it arrives, we will need to configure it and restore from our tape backups; we are looking at about 72 hours once the SAN is online. Once the restore to the SAN is complete, the databases will come online on their own."

The big boss should have seen this document too, and they may decide that a cold spare SAN, saving 2 weeks of downtime in the event of a catastrophic failure, is worth the cost of a second SAN sitting unused in a storage room somewhere. On the other hand, maybe your company is OK with being down for 2 weeks while a new SAN is ordered in. I don't know your business.
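The arithmetic behind that "approximately 2 and a half weeks" answer is worth writing down explicitly in your RTO document. Using the hypothetical numbers from the example above (2 weeks for SAN replacement, ~72 hours to restore from tape — your real numbers will differ):

```shell
# Worked RTO example with the hypothetical numbers from above.
SAN_REPLACEMENT_DAYS=14   # lead time to order in a new SAN
RESTORE_HOURS=72          # restore from tape once the SAN is online
TOTAL_HOURS=$(( SAN_REPLACEMENT_DAYS * 24 + RESTORE_HOURS ))
TOTAL_DAYS=$(( TOTAL_HOURS / 24 ))
echo "Total RTO: ${TOTAL_HOURS} hours (~${TOTAL_DAYS} days)"
# → Total RTO: 408 hours (~17 days)
```

17 days is roughly 2 and a half weeks, which is the number you'd hand the big boss, and the per-component breakdown is what makes the cold-spare-SAN conversation concrete.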