These kinds of issues are one reason I do not recommend auto-patching and restarting clusters of any kind. Clusters should always be patched and restarted manually, so that quorum is maintained throughout the process and each node is patched, restarted, and confirmed ready before it is used to support any services.
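To make that "confirm it is ready" step concrete, here is a minimal Python sketch of a pre-flight check using SQL Server's sys.dm_hadr_cluster_members DMV. It verifies that every cluster member is UP and reports its quorum vote before you touch the next node. The pyodbc driver, Windows authentication, and the node name SQLNODE1 are assumptions for illustration, not a prescribed setup.

```python
# Pre-flight check: confirm all WSFC members are up and voting before
# patching the next node. Server name and connection details are
# illustrative assumptions; adjust for your environment.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLNODE1;"          # hypothetical node name
    "DATABASE=master;"
    "Trusted_Connection=yes;"
)

def cluster_members_healthy() -> bool:
    """Return True only if every cluster member is UP, per SQL Server's view."""
    query = """
        SELECT member_name, member_state_desc, number_of_quorum_votes
        FROM sys.dm_hadr_cluster_members;
    """
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.execute(query).fetchall()
    for name, state, votes in rows:
        print(f"{name}: {state} (quorum votes: {votes})")
    return all(state == "UP" for _, state, _ in rows)

if __name__ == "__main__":
    if not cluster_members_healthy():
        raise SystemExit("A cluster member is down - do not patch another node.")
```

Running something like this between each patch/restart, and stopping if anything is not UP, is what keeps quorum safe during the rolling process.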
One of the problems with maintenance in an AG is the non-shared storage between nodes. Ideally, in this type of environment you would have network storage available and set up your maintenance jobs to back up to that network storage using a UNC path. This alleviates any concern over which node is performing the backups, since every replica writes to the same location.
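As a sketch of what such a node-agnostic backup job might look like: the script below uses SQL Server's sys.fn_hadr_backup_is_preferred_replica function, so the identical job can be scheduled on every replica and only the preferred one actually writes the backup to the UNC path. The share path \\backupserver\sqlbackups, the database name SalesDB, and the connection details are all assumptions for illustration.

```python
# Node-agnostic AG backup job: deploy identically to every replica.
# Only the replica SQL Server designates as preferred runs the backup,
# and it always writes to the same UNC path, so it does not matter
# which node happens to be primary this month.
import datetime
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)
DB_NAME = "SalesDB"                           # hypothetical AG database
BACKUP_SHARE = r"\\backupserver\sqlbackups"   # hypothetical UNC path

def backup_if_preferred_replica() -> None:
    # autocommit is required: BACKUP cannot run inside a transaction.
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        # Ask SQL Server whether this replica should take the backup.
        preferred = conn.execute(
            "SELECT sys.fn_hadr_backup_is_preferred_replica(?);", DB_NAME
        ).fetchval()
        if not preferred:
            print(f"{DB_NAME}: not the preferred backup replica, skipping.")
            return
        stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        path = f"{BACKUP_SHARE}\\{DB_NAME}_{stamp}.bak"
        conn.execute(
            f"BACKUP DATABASE [{DB_NAME}] TO DISK = N'{path}' WITH COMPRESSION;"
        )
        print(f"Backed up {DB_NAME} to {path}")

if __name__ == "__main__":
    backup_if_preferred_replica()
```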
If possible, when patching a cluster I recommend failing over only a single time per patch cycle; that way you run for a month on node 1, the next month on node 2, and the following month you are back on node 1. The order of operations would be (a sketch of the identify-and-fail-over steps follows the list):
- Identify the current node hosting the services
- Patch and restart the other node, and validate that it is back up and available.
- Manually fail over services to the newly patched and restarted node
- Patch and restart the first node
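Here is a minimal Python sketch of the first and third steps above, under stated assumptions: pyodbc, Windows authentication, and the hypothetical names AG1, SQLNODE1, and SQLNODE2. It asks each node for its local replica role to find the current primary, then issues the manual failover from the newly patched secondary, which is where ALTER AVAILABILITY GROUP ... FAILOVER must be run.

```python
# Identify the current primary, then fail over to the patched secondary.
# AG name and node names are illustrative assumptions; the failover
# command must run while connected to the target secondary.
import pyodbc

AG_NAME = "AG1"                     # hypothetical availability group
NODES = ["SQLNODE1", "SQLNODE2"]    # hypothetical replica hosts

def connect(server: str) -> pyodbc.Connection:
    return pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};DATABASE=master;Trusted_Connection=yes;",
        autocommit=True,
    )

def current_primary() -> str:
    """Check each node's local replica role to find the current primary."""
    query = """
        SELECT ars.role_desc
        FROM sys.dm_hadr_availability_replica_states ars
        JOIN sys.availability_groups ag ON ars.group_id = ag.group_id
        WHERE ars.is_local = 1 AND ag.name = ?;
    """
    for node in NODES:
        with connect(node) as conn:
            if conn.execute(query, AG_NAME).fetchval() == "PRIMARY":
                return node
    raise RuntimeError("No node reports the PRIMARY role.")

def fail_over_to_patched_secondary() -> None:
    primary = current_primary()
    secondary = next(n for n in NODES if n != primary)
    print(f"Primary is {primary}; failing over to patched node {secondary}.")
    # The failover statement executes on the target secondary replica.
    with connect(secondary) as conn:
        conn.execute(f"ALTER AVAILABILITY GROUP [{AG_NAME}] FAILOVER;")

if __name__ == "__main__":
    fail_over_to_patched_secondary()
```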
This reduces overall downtime to a single failover event, and it allows the patching and restarting to occur prior to the scheduled downtime. The patching and restarting never touches the active system, and therefore does not impact the users or the application; the only impact is the failover itself.