If I misunderstood, I apologize.
What you're saying you want is a separate database to use for reporting. That separate database will consist of data from a bunch of other databases in the system. You want that separate database to be "linked," meaning zero, or near zero, lag on the data from these disparate other systems, yet all querying will be done locally against this copy of the database?
Is that a good summary?
If so, good luck.
There's not a single tool that's going to do this for you and do it well. A lot of people might lean on linked servers to deal with this, but they put all, or almost all, of the processing back on the original systems. You might look at multiple customized replication setups, but that's going to be a serious nightmare to set up and maintain. Mirroring is out. Availability Groups are out. Both because you can't selectively pull pieces of a database into a common location.
Chances are, to make this happen, you're going to have to build a customized system. That's the only way I know to do it. An alternative might be to set up Availability Groups and read-only secondaries for the databases you're interested in. Then, create your custom, all-in-one database that pulls from the various sources through linked servers. Just link to the read-only secondaries so that you're offloading the processing from your production instances.
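To make that more concrete, here's a rough sketch of what the linked-server side might look like. All the names (the listener, databases, tables) are made up for illustration, and this assumes read-only routing is already configured on the Availability Group:

```sql
-- Hypothetical example: create a linked server that routes to a readable
-- secondary via the AG listener. Names here are placeholders, not a spec.
EXEC sp_addlinkedserver
    @server = N'ReportSecondary',
    @srvproduct = N'',
    @provider = N'SQLNCLI11',
    @datasrc = N'AGListener',
    @provstr = N'ApplicationIntent=ReadOnly';  -- routes to a read-only secondary

-- Then, in the all-in-one reporting database, pull just the pieces you need.
INSERT INTO Reporting.dbo.SalesSummary (OrderDate, TotalDue)
SELECT OrderDate, TotalDue
FROM ReportSecondary.SalesDB.dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -1, GETDATE());
```

You'd wrap loads like that in scheduled jobs, one per source system, which is exactly why this turns into a custom-built process rather than a single tool.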
Another alternative, one I don't know much about, is SQL Server 2019 and Big Data Clusters (hate the name). These let you build out a single source for querying multiple disparate sources, using containers orchestrated through Kubernetes.
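For a taste of the querying model, Big Data Clusters build on PolyBase data virtualization, which lets you define an external table over a remote SQL Server and query it as if it were local. Again, every name below (data source, credential, tables) is hypothetical, and the database scoped credential is assumed to already exist:

```sql
-- Hypothetical PolyBase sketch (SQL Server 2019): an external table
-- pointing at a table in a remote SQL Server instance.
CREATE EXTERNAL DATA SOURCE RemoteSales
WITH (
    LOCATION = 'sqlserver://remote-host:1433',
    CREDENTIAL = RemoteSalesCred  -- assumed pre-existing database scoped credential
);

CREATE EXTERNAL TABLE dbo.RemoteOrders
(
    OrderID INT,
    OrderDate DATETIME2,
    TotalDue MONEY
)
WITH (LOCATION = 'SalesDB.dbo.Orders', DATA_SOURCE = RemoteSales);

-- Queries run locally, but the work against the data happens at the source.
SELECT OrderDate, SUM(TotalDue) AS DailyTotal
FROM dbo.RemoteOrders
GROUP BY OrderDate;
```

Note the trade-off: like linked servers, this still pushes processing back to the source systems, so it answers the "single place to query" part of the question more than the "zero lag, local copy" part.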