You could also use a file share witness (read up on the pros and cons).
I'd say 2 servers (VM or physical) per site, where each site has its own local AG on the 2 local servers in synchronous-commit mode (zero data loss), plus 3 offsite readable async secondaries on the second server of each other location.
That way each office reads and writes to its local primary AG, and all changes are replicated in the background to the local sync secondary and the 3 remote async secondaries.
Assuming the usual read-to-write ratio of roughly 98% to 2%, only the 2% of write traffic needs to cross the WAN.
And if each site is set up the same way, every location has ALL the data available to read locally, even if both internet links go down.
Writing, however, is a different beast, and will be the trickiest part.
The connection strings for SQL AGs allow an application intent setting (ApplicationIntent), so you can route READS to the (local) readable secondary and WRITES to the primary.
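To make that concrete, here's a minimal sketch of split read/write connection strings against an AG listener. The listener name, database, port, and driver version are placeholder assumptions, not values from your environment:

```python
# Sketch: build split read/write ODBC connection strings for a SQL Server
# AG listener. "ag-listener.example.local" and "SalesDB" are placeholders.

def build_conn_string(listener: str, database: str, read_only: bool) -> str:
    """Build an ODBC connection string for a SQL Server AG listener.

    ApplicationIntent=ReadOnly lets read-only routing send the session to a
    readable secondary; ReadWrite (the default) lands on the primary.
    """
    intent = "ReadOnly" if read_only else "ReadWrite"
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{listener},1433;"
        f"Database={database};"
        f"ApplicationIntent={intent};"
        "MultiSubnetFailover=Yes;"  # register all listener IPs for faster failover
        "Encrypt=Yes;"
    )

read_cs = build_conn_string("ag-listener.example.local", "SalesDB", read_only=True)
write_cs = build_conn_string("ag-listener.example.local", "SalesDB", read_only=False)
```

The app would then open read sessions with something like `pyodbc.connect(read_cs)` and write sessions with `pyodbc.connect(write_cs)`.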
But this means your application has to be built for split read and write connection strings to take full advantage of this architecture, and therein lies the rub for nearly all the legacy stuff that only supports a single connection string.
Do you see the big picture I'm trying to paint here?
You use 4 AGs to spread the data around, use the intent-aware connection strings to reach the (possibly offsite) primary for writes, and let SQL Server deal with the hassle of figuring out where the primary and secondaries are and what state they're in.
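If the app can be taught two connection strings, the app-side routing itself is tiny. A hypothetical sketch (the class name and the naive SELECT-based classification are mine, not a standard API; real code would feed it pyodbc-style strings with ApplicationIntent set):

```python
# Hypothetical app-side read/write splitter. The two connection strings are
# assumptions; in practice they would be the ReadOnly/ReadWrite listener
# strings for the local site's AG.

class SplitRouter:
    """Route read statements to the read string, everything else to the write string."""

    READ_PREFIXES = ("select", "with")  # naive classification, good enough for a sketch

    def __init__(self, read_cs: str, write_cs: str):
        self.read_cs = read_cs
        self.write_cs = write_cs

    def conn_string_for(self, sql: str) -> str:
        first_word = sql.lstrip().split(None, 1)[0].lower()
        return self.read_cs if first_word in self.READ_PREFIXES else self.write_cs

router = SplitRouter(
    read_cs="...;ApplicationIntent=ReadOnly;",
    write_cs="...;ApplicationIntent=ReadWrite;",
)
```

A call like `router.conn_string_for("SELECT * FROM Orders")` picks the read string, while `router.conn_string_for("UPDATE Orders SET ...")` picks the write string, so only writes ever have to cross the WAN to a remote primary.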