There's a lot to unpack here, but I'll try to answer some of it.
First question: can you identify which tables have changed between two dates? Yes. The key to this is sys.dm_db_index_usage_stats. If you capture it one Saturday and then capture it again on a second Saturday (or a Monday, Tuesday, whatever), you can compare the two captures. The user_updates column shows how many INSERT/UPDATE/DELETE operations have hit a given index, so the difference between captures tells you how much that index has changed. Now, these counters reset on restarts, failovers, stuff like that, so it's not a flawless measure. However, it's a quick & easy way to make this determination without setting up, for example, an Extended Events session to capture every UPDATE/INSERT/DELETE and then aggregate that data. You could also look to Query Store to capture the modification queries and then search & aggregate from there. It may be more accurate, but it's going to be a giant pain. I'd stick with sys.dm_db_index_usage_stats, but understand the limitations there.
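To make that concrete, here's a rough sketch of the snapshot-and-compare idea. The dbo.IndexUsageSnapshot table is just an illustrative name I made up; adjust the schema and schedule to taste:

-- Hypothetical table to hold the weekly snapshots.
CREATE TABLE dbo.IndexUsageSnapshot
(
    capture_date datetime2 NOT NULL DEFAULT SYSDATETIME(),
    database_id  int       NOT NULL,
    object_id    int       NOT NULL,
    index_id     int       NOT NULL,
    user_updates bigint    NOT NULL
);

-- Run this on whatever schedule you pick (each Saturday, say).
INSERT INTO dbo.IndexUsageSnapshot (database_id, object_id, index_id, user_updates)
SELECT database_id, object_id, index_id, user_updates
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();

-- Compare the two most recent captures per index.
-- A negative number means the counters were reset in between (restart, failover).
;WITH snaps AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY object_id, index_id
                              ORDER BY capture_date DESC) AS rn
    FROM dbo.IndexUsageSnapshot
)
SELECT OBJECT_NAME(curr.object_id) AS table_name,
       curr.index_id,
       curr.user_updates - prev.user_updates AS updates_between_captures
FROM snaps AS curr
JOIN snaps AS prev
  ON prev.object_id = curr.object_id
 AND prev.index_id  = curr.index_id
 AND prev.rn = 2
WHERE curr.rn = 1
ORDER BY updates_between_captures DESC;

Keep in mind user_updates counts modification operations against the index, not rows changed, so treat it as a relative measure of churn rather than an exact count.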
Second, how do you tackle performance issues on big tables updated frequently? Carefully. Seriously though, this is far too open-ended to give you a precise answer. I've had tables that were updated so frequently that we had to run a statistics update every 15 minutes to ensure we were getting good stats. Now, that led to lots of recompiles on the procedures referencing those tables, so nothing is free. Updating statistics does use resources and it does cause a tiny amount of blocking. However, it's the recompiles that are the big pain here. So, for example, if your CPU is already maxing out, forcing a lot more recompiles on the server could be a serious problem.
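For reference, the kind of command we scheduled looked roughly like this, running as a SQL Agent job step every 15 minutes. The table and statistic names here are placeholders, not anything from your system:

-- Update all stats on one hot table at a fixed sample rate.
UPDATE STATISTICS dbo.HotTable WITH SAMPLE 25 PERCENT;

-- Or target just the one badly-behaved statistic to keep the
-- resource cost, and the resulting recompiles, to a minimum.
UPDATE STATISTICS dbo.HotTable IX_HotTable_Status WITH FULLSCAN;

The narrower you can make the target, the less you pay in recompiles for the freshness you're buying.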
Third, blocking. I already touched on that above: the statistics update itself doesn't block much at all; the recompiles it triggers are the bigger cost.
Fourth, what are the scenarios where you sample differently? That one is entirely dependent on your system and your data. For most people, most of the time, the automatic update of stats, which is sampled, is adequate. Some systems absolutely need a FULLSCAN on their stats to get as much accuracy as possible. Other systems absolutely barf when the stats are updated using a FULLSCAN, because the extra accuracy picks up just how badly skewed the data is, whereas a sampled scan, being less accurate, results in better behavior. This one is all about understanding what's happening with the optimizer and how it affects your queries.
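The mechanics are simple; it's choosing between them that's the hard part. Table and index names below are purely illustrative:

-- Let SQL Server pick the sample rate (same behavior as the auto update).
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID;

-- Explicit sample rate, a middle ground.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH SAMPLE 10 PERCENT;

-- Read every row for maximum accuracy on badly skewed data.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;

Test against your own workload before committing to FULLSCAN everywhere; as I said, more accurate histograms don't always mean better plans.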
In short, there's not a single correct answer here. However, I very much believe that having different processes do different things to your stats & indexes throughout the week is probably a recipe for disaster. While I wouldn't suggest a single 'do this and nothing else, everywhere' approach, I'd still suggest a controlled, thought-out approach as opposed to Dodge City on a Saturday night.
I hope all this helps. If there's more you need, let's focus on small aspects of it, one at a time. Statistics is a very broad & complicated topic, so asking ALL the questions at once makes it really hard to answer them.