Hello Kristen. To address your concerns: messages very rarely fail to send. The only cases I can think of are that the destination is unreachable or the sending transaction is rolled back. Even when the destination is unreachable, Service Broker will deliver the message as soon as the destination comes back, assuming the conversation is still open. It is fairly resilient like that.

That said, there may be cases where you want to re-send a message that didn't arrive, and those get trickier IF you don't keep an audit log. My system keeps a log of all messages sent and received for one week. If a message gets "lost" or processed incorrectly, end users are pretty quick to notify me, plus we have reports we can run to validate that data is accurate between the two systems within a margin of error (messages can be in transit while the report runs). When a message is processed incorrectly, I can check the data that went across; every time, the cause has been a human-level process change that resulted in a bad message going across the pipe. Since the messages are saved, it is easy to confirm whether they were good, and if a conversation gets closed before all of its messages are processed, the audit log lets us re-send anything that was missed.
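For what it's worth, the audit side doesn't have to be fancy. Here's a minimal sketch of the idea (the table, queue, and column names are made up for illustration, not my actual system): log every message as the activation procedure receives it, and purge anything older than a week.

    -- Hypothetical audit table; names are illustrative only.
    CREATE TABLE dbo.BrokerMessageLog
    (
        LogId          bigint IDENTITY(1,1) PRIMARY KEY,
        ConversationId uniqueidentifier NOT NULL,
        MessageType    sysname          NOT NULL,
        MessageBody    xml              NULL,
        ReceivedAt     datetime2(0)     NOT NULL DEFAULT SYSDATETIME()
    );
    GO

    -- Inside the activation procedure (typically wrapped in WAITFOR),
    -- log each message right after the RECEIVE.
    DECLARE @ch uniqueidentifier, @mt sysname, @body varbinary(max);

    RECEIVE TOP (1)
        @ch   = conversation_handle,
        @mt   = message_type_name,
        @body = message_body
    FROM dbo.TargetQueue;  -- hypothetical queue name

    IF @ch IS NOT NULL
        INSERT dbo.BrokerMessageLog (ConversationId, MessageType, MessageBody)
        VALUES (@ch, @mt, CAST(@body AS xml));
    GO

    -- A nightly job keeps the one-week window.
    DELETE dbo.BrokerMessageLog
    WHERE ReceivedAt < DATEADD(DAY, -7, SYSDATETIME());

The nice part is that the log doubles as your re-send source: the message body is already sitting there in the exact XML that originally went across.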
The advantage of Service Broker over a batch upsert is that it runs in near real time and asynchronously. For us, the near-real-time part was what mattered. Some logic in an in-house application relies on data inside the ERP system. Our options were: a linked server (the previous solution, but it was causing performance issues), a scheduled task to copy data over (the data wasn't close enough to real-time), CDC (which at the time would have required upgrading from Standard to Enterprise edition), drastic code changes on the C# side to pull data from both the ERP and the product database, or Service Broker to copy the relevant data from the ERP to the product database in near real time. Of those options, Service Broker was the most cost-effective and lowest-risk approach.
XML is a pain in the butt to work with, but on the source side you just grab all of the changed data and toss it into an XML object; THAT part is easy. It is on the destination side that things get tricky, because now you need to pull that XML apart. The nice thing is that it's easy to build up sample data to work with: put your triggers in place on the source side so they simply capture the data and toss it into a table, without Service Broker sending anything (see the sketch below). Run that for a short period on a source table to capture what you'll need to process, then write your query to handle it. From there, as long as no metadata changes on the source database for the tables Service Broker is using, you will have no problems. For us, the biggest pain in the butt is that when we upgrade the ERP, it drops any tables with schema-level changes, which means all of the triggers I made go poof. That's easy to catch in the test system during a test upgrade, though, so no harm there, and minimal problems after live is upgraded since my scripts to recreate the triggers are ready to rock.
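A capture-only trigger for that sample-data step might look something like this (source table and column names are hypothetical); it serializes the changed rows with FOR XML and stores them, with no SEND involved yet:

    -- Hypothetical staging table for captured messages.
    CREATE TABLE dbo.CustomerChangeCapture
    (
        CaptureId  bigint IDENTITY(1,1) PRIMARY KEY,
        ChangeXml  xml          NOT NULL,
        CapturedAt datetime2(0) NOT NULL DEFAULT SYSDATETIME()
    );
    GO

    CREATE TRIGGER trg_Customer_Capture
    ON dbo.Customer          -- hypothetical source table
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Serialize only the columns the destination cares about.
        DECLARE @x xml =
        (
            SELECT i.CustomerId, i.Name, i.CreditLimit
            FROM inserted AS i
            FOR XML PATH('row'), ROOT('Customer'), TYPE
        );

        IF @x IS NOT NULL
            INSERT dbo.CustomerChangeCapture (ChangeXml) VALUES (@x);
    END;
    GO

Once the real pipeline is ready, the FOR XML query moves into the trigger that does the SEND, so the message shape you tested against is exactly what goes across.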
Using a trigger to capture data changes is another solution you could go with, but then you run into the question of how often you will query that captured data to update the destination tables. Does that happen every minute, hour, day, or week? How "real-time" do you need the data to be, and how frequently is it changing? If you only need hourly refreshes (for example), then Service Broker is overkill. If you need near real time, then Service Broker is helpful. Linked servers could work as well, just pulling data from the source (presuming you can live with the potential performance hit). With Service Broker, you can also pull across only the columns of interest.
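For reference, a rough sketch of that polling version (all names hypothetical): the scheduled job is just an upsert from the source, or from whatever a capture trigger staged:

    -- Runs from a SQL Agent job on whatever schedule matches your freshness needs.
    MERGE dbo.DestCustomer AS t                  -- hypothetical destination table
    USING
    (
        SELECT CustomerId, Name, CreditLimit
        FROM SourceDb.dbo.Customer               -- or the staging/capture table
    ) AS s
        ON s.CustomerId = t.CustomerId
    WHEN MATCHED THEN
        UPDATE SET t.Name = s.Name, t.CreditLimit = s.CreditLimit
    WHEN NOT MATCHED THEN
        INSERT (CustomerId, Name, CreditLimit)
        VALUES (s.CustomerId, s.Name, s.CreditLimit);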
The only problem I see with your trigger that just copies the row to a staging table is that the trigger will create blocking on the staging table, which may affect the performance of the tool. Plus, you end up with duplicated data inside your database (the actual table and the staging table will hold some of the same rows).
The approach I would take (if it is allowed... vendor tools don't always like you playing with their tables) would be to put an UPDATE trigger on the source table that sets the modify date on any rows that are updated. That would make the source database's "modify date" reliable again. A column that is not reliable is (in my mind) useless, so why bother keeping it as is?
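Something along these lines is all it takes (table and column names hypothetical), with a guard so the trigger doesn't re-fire itself if recursive triggers are enabled:

    CREATE TRIGGER trg_Customer_TouchModifyDate
    ON dbo.Customer                  -- hypothetical vendor table
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Bail out if this firing came from the trigger's own UPDATE.
        IF TRIGGER_NESTLEVEL(@@PROCID) > 1
            RETURN;

        UPDATE c
        SET    c.ModifyDate = SYSDATETIME()
        FROM   dbo.Customer AS c
        JOIN   inserted     AS i ON i.CustomerId = c.CustomerId;
    END;
    GO

With RECURSIVE_TRIGGERS off (the default) the guard never fires, but it's cheap insurance in case someone flips that setting later.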
And once you have a sample piece of XML to work with, it isn't too hard to make it bend to your will, especially with the process being described: you will likely have a root tag, followed by the table, followed by the data, which is a pretty easy shape to work with. Our process uses a separate message type for INSERT, UPDATE, and DELETE so we can process each accordingly. Alternately, you could encode the operation into the message itself.
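Shredding a message with that root/table/row shape is then just a nodes()/value() query; here's a sketch against made-up sample data:

    DECLARE @msg xml = N'
    <Customer>
      <row><CustomerId>42</CustomerId><Name>Acme</Name><CreditLimit>5000</CreditLimit></row>
    </Customer>';

    SELECT
        r.value('(CustomerId/text())[1]',  'int')           AS CustomerId,
        r.value('(Name/text())[1]',        'nvarchar(100)') AS Name,
        r.value('(CreditLimit/text())[1]', 'decimal(18,2)') AS CreditLimit
    FROM @msg.nodes('/Customer/row') AS x(r);

From there, the result set feeds straight into whatever insert/update/delete statement matches the message type.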
To summarize: if I need near-real-time data and I can't use a linked server (performance issues, company policy, etc.), then Service Broker is the option I'll go with. I can pick exactly which tables and columns it handles, and it gives me a near-real-time method of processing my data.
If near-real-time data isn't my goal, I am probably going to look at offloading the work to something like SSIS rather than triggers. There, I can pull only the columns I care about, and the only processing on the source database is the initial data pull; everything else can be processed on the SSIS server or at the destination side. That way I have minimal impact on the source system.
Plus, since you already have Service Broker set up and working, you may be able to piggyback on the existing setup for sending messages across. You may need a few new objects depending on how things are configured, but a lot of the setup tasks are already completed.
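Depending on what already exists, "a few new objects" usually just means a message type, a contract, and a queue/service pair on each side, something along these lines (names are hypothetical):

    CREATE MESSAGE TYPE [//Erp/CustomerChange]
        VALIDATION = WELL_FORMED_XML;

    CREATE CONTRACT [//Erp/CustomerChangeContract]
        ([//Erp/CustomerChange] SENT BY INITIATOR);

    CREATE QUEUE dbo.CustomerChangeQueue;

    CREATE SERVICE [//Erp/CustomerChangeService]
        ON QUEUE dbo.CustomerChangeQueue
        ([//Erp/CustomerChangeContract]);

The endpoints, routes, and security you already fought through once get reused as-is, which is most of the hard part.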