Here is a quick design scenario and a question.
I have a common table in two databases on separate servers:
Server1 -> Database1 -> TableABC (Used by website, mostly single row selects)
Server2 -> Database2 -> TableABC (Used by internal tools to query large amount of data)
The structures are identical. The table on Server1 is updated by the website, and the table on Server2 is updated by the internal tools.
Stored procedures on each server use linked servers to communicate and fetch records from the other side.
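For context, the cross-server lookups today look roughly like this. This is only a sketch: the linked server name `Server1Link`, the `dbo` schema, and the `Id` column are assumptions, but the four-part naming is standard SQL Server linked-server syntax:

```sql
-- Running on Server2: fetch a record from Server1 over a linked server.
-- "Server1Link" is a hypothetical linked server name; four-part names
-- follow the pattern linked_server.database.schema.object.
SELECT *
FROM Server1Link.Database1.dbo.TableABC
WHERE Id = @Id;
```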
There is a plan to change this approach and keep the data in a single source. Here is the solution proposed by my colleagues:
1) Merge the tables and keep the merged table on Server1 (the website server)
2) Set up transactional replication to maintain a copy of the table on Server2 (which can be used to run the large queries)
3) All updates from Server2 go through a linked server to update the table on Server1
I agree with this, but my concern is that on Server2 I may read a set of records from the replicated copy of the table, yet write the changes directly to the table on Server1 via the linked server. Do you see any chance of replication breaking?
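To make the concern concrete, the read/write split on Server2 would look something like the sketch below. The linked server name, column names, and the `rowversion` column are all made up for illustration; the `rowversion` comparison is just one common pattern for guarding against updating a row based on a stale replicated read, not something already in our design:

```sql
-- Running on Server2: read from the local, replication-maintained copy...
DECLARE @Id int = 42;
DECLARE @Ver rowversion;

SELECT @Ver = RowVer              -- RowVer: hypothetical rowversion column
FROM Database2.dbo.TableABC       -- local replicated copy (may lag)
WHERE Id = @Id;

-- ...but write back to the source table on Server1 via the linked server,
-- and only if the source row still matches what the replica showed us.
UPDATE Server1Link.Database1.dbo.TableABC
SET SomeColumn = 'new value'
WHERE Id = @Id
  AND RowVer = @Ver;              -- no-op if the local read was stale
```

The update then has to replicate back down to Server2 before the local copy reflects it, so the replica always lags by the replication latency.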
Also, this table contains millions of rows.
Any suggestions or ideas for redesign?
Any help is much appreciated.