• What you could do is set up a baseline of how things perform now, then implement a small replication setup, maybe a single table/article that has enough activity to show measurable impact, and extrapolate from that based on your full workload.
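    As a rough sketch of that kind of test rig (all names here, TestDB, BaselinePub, dbo.Orders, SUBSRV, are placeholders, and the options shown are just one reasonable set), a minimal single-article transactional publication might look like:

    ```sql
    -- Enable the database for publishing (placeholder names throughout).
    EXEC sp_replicationdboption
        @dbname = N'TestDB', @optname = N'publish', @value = N'true';

    -- Create a transactional publication; 'concurrent' takes the initial
    -- snapshot without holding locks on the published table for the duration.
    EXEC sp_addpublication
        @publication = N'BaselinePub',
        @synchronization_method = N'concurrent',
        @status = N'active';

    -- Publish a single busy table as the only article.
    EXEC sp_addarticle
        @publication   = N'BaselinePub',
        @article       = N'Orders',
        @source_owner  = N'dbo',
        @source_object = N'Orders';

    -- Push the changes to one subscriber.
    EXEC sp_addsubscription
        @publication       = N'BaselinePub',
        @subscriber        = N'SUBSRV',
        @destination_db    = N'TestDB_repl',
        @subscription_type = N'Push';
    ```

    Compare CPU, I/O, and log-throughput counters before and after enabling this, then scale the observed delta by how much busier the full workload is than the one article.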

    Replication (I assume transactional here) is a workload-driven item. The Log Reader Agent picks up committed changes from the transaction log, and they are delivered to the subscriber (via the distributor) as commands, so the impact depends on workload. One caveat: by default, set-based statements are broken into singleton row commands, so an "update 1 billion rows" is delivered as roughly a billion individual commands, not one. If you instead publish the execution of a stored procedure, only the single EXEC call is sent across, which is a low-bandwidth/CPU item.
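    For the case where you want a big set-based operation to cross the wire as one command rather than per-row changes, you can publish a stored procedure's execution. A hedged sketch (publication and procedure names are placeholders):

    ```sql
    -- Publish the *execution* of a procedure: subscribers receive the single
    -- EXEC call and run it locally, instead of receiving per-row commands.
    -- N'serializable proc exec' is the safer variant (only replicates calls
    -- made inside serializable transactions).
    EXEC sp_addarticle
        @publication   = N'BaselinePub',
        @article       = N'BulkUpdatePrices',
        @source_owner  = N'dbo',
        @source_object = N'BulkUpdatePrices',
        @type          = N'proc exec';

    -- On the publisher, this one statement is replicated as one command,
    -- no matter how many rows the procedure touches:
    EXEC dbo.BulkUpdatePrices @factor = 1.05;
    ```

    The trade-off is that the subscriber must produce the same result from the same call, so this suits deterministic, set-based maintenance procedures rather than logic that depends on publisher-only state.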