It seems weird that the offline upgrade occurred on all three nodes. Once the database is upgraded, it's upgraded. The changes made to the database during the upgrade propagate to the secondaries as normal replicated transactions.
I encourage you to test again with trace flag -T3600, which will write each of the upgrade steps with their starting times into the SQL error log. By comparing the starting time of each step with the starting time of the next step, you can see where the time is spent.
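As a sketch of how you'd read the results back out — assuming the upgrade steps land in the error log with a recognizable phrase (the exact message text varies by build, so adjust the search string) — you can filter the log with `xp_readerrorlog` and compare the timestamps between consecutive steps:

```sql
-- Read the current SQL error log (log 0, type 1 = error log),
-- filtering on a search string. N'upgrade' is a guess at the
-- message text; adjust it to match what your build writes.
EXEC sys.xp_readerrorlog 0, 1, N'upgrade';
```

The gap between one step's start time and the next step's start time is the duration of the first step.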
Those steps shouldn't repeat on failover.
Also, be sure you are on the latest patch level for SQL Server 2017; improvements to the upgrade process have been rolling out with the cumulative updates (CUs).

There are several things that can affect upgrade times, but the first thing I would check is the count of indexes. Each index gets examined during the upgrade. If you have tables and indexes that can be dropped prior to upgrade, you will speed up the process.
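A quick way to size this up — a minimal sketch using the standard catalog views, run in each user database — is to count indexes per table and look for drop candidates before the upgrade:

```sql
-- Count indexes per user table in the current database;
-- tables with many nonclustered indexes are worth reviewing
-- as candidates to drop before the upgrade.
SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
        OBJECT_NAME(i.object_id)        AS table_name,
        COUNT(*)                        AS index_count
FROM    sys.indexes AS i
JOIN    sys.objects AS o
        ON o.object_id = i.object_id
WHERE   o.is_ms_shipped = 0
  AND   i.type > 0                      -- skip heaps
GROUP BY OBJECT_SCHEMA_NAME(i.object_id),
         OBJECT_NAME(i.object_id)
ORDER BY index_count DESC;
```

Script out anything you drop so it can be recreated after the upgrade if it turns out to be needed.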
We would want all of the nodes to be completely upgraded, so each has to be failed over to in order to apply the upgrades to all user databases.
This is not necessary. AG replication will upgrade the secondary user databases. Upgrade steps change the database the same way user transactions do. You only need to fail back for infrastructure reasons - distributing workload/instances across nodes, moving workload to a preferred server, and so on.
After the first failover to an upgraded instance, you should begin your post-upgrade steps (changing the compatibility level, resampling all statistics, kicking off a full backup, etc.). Those changes will replicate to the secondaries as well, and only need to be done once.
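As a hedged example — the database name, backup path, and target compatibility level below are placeholders, not values from this thread — the post-upgrade steps for one user database on the new primary might look like:

```sql
-- Placeholders: YourDb is a stand-in database name; 140 is the
-- SQL Server 2017 compatibility level; the backup path is assumed.
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 140;

USE YourDb;
EXEC sys.sp_updatestats;   -- refresh statistics after the upgrade

BACKUP DATABASE YourDb
    TO DISK = N'X:\Backups\YourDb_postupgrade.bak'
    WITH COMPRESSION, CHECKSUM;
```

Note that `sp_updatestats` uses the default sampling; if you specifically want a full resample, `UPDATE STATISTICS ... WITH FULLSCAN` per table is the heavier alternative. All of these run on the primary and replicate to the secondaries.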