We take a different approach not mentioned in these articles, and it seems to work very well for our always-connected client/server apps.
We do a lot of large/wide data set processing in stored procedures, and passing huge text or XML payloads from the UI did not seem like the best option.
Rather than using XML or table functions, we use temp tables. From within the application we create a temp table (or append to an existing one), tagging each row with a unique identifier for that "batch". Then we simply call a stored procedure, passing the batch_id, and the multi-row data set is available to the proc for whatever processing needs to be performed.
The temp table is SPID specific, so there is no risk of data collision between users, nor between multiple processes for the same user when multi-threading (because of the batch_id). Once the proc is done with the data (which is almost always the case), it simply purges that batch from the temp table. If the temp table becomes empty, it is dropped altogether, so the overhead on the server is very low.
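As a rough sketch of the server-side half of this pattern (using Python's sqlite3 as a stand-in, since SQLite temp tables are likewise private to the connection, much like SPID-scoped #temp tables; the table and column names here are made up for illustration):

```python
import sqlite3

def process_batch(conn, batch_id):
    """Stand-in for the stored procedure: consume one batch, then purge it."""
    rows = conn.execute(
        "SELECT item, qty FROM temp.batch_input WHERE batch_id = ?",
        (batch_id,),
    ).fetchall()
    total = sum(qty for _, qty in rows)  # whatever processing the proc performs
    # Purge only this batch; other batches in the same session are untouched.
    conn.execute("DELETE FROM temp.batch_input WHERE batch_id = ?", (batch_id,))
    # Drop the table once no batches remain, so the session carries no overhead.
    (remaining,) = conn.execute("SELECT COUNT(*) FROM temp.batch_input").fetchone()
    if remaining == 0:
        conn.execute("DROP TABLE temp.batch_input")
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE batch_input (batch_id TEXT, item TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO batch_input VALUES (?, ?, ?)",
    [("b1", "widget", 3), ("b1", "gadget", 5), ("b2", "widget", 1)],
)
print(process_batch(conn, "b1"))  # 8 (batch "b2" is left in place)
```

The batch_id filter is what lets several in-flight batches from the same connection coexist without colliding.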
Yes, the initial loading of the temp tables is performed row-by-row behind the scenes, but that is achieved with a single line of code from the app.
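The app-side load can be sketched the same way: a single call that the driver expands into row-by-row inserts behind the scenes (again using sqlite3 as a stand-in; the names are illustrative):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE batch_input (batch_id TEXT, item TEXT, qty INTEGER)")

batch_id = str(uuid.uuid4())  # unique identifier for this "batch"
rows = [(batch_id, "widget", 3), (batch_id, "gadget", 5)]

# One line from the app's point of view; the driver iterates the rows internally.
conn.executemany("INSERT INTO batch_input VALUES (?, ?, ?)", rows)

(count,) = conn.execute(
    "SELECT COUNT(*) FROM batch_input WHERE batch_id = ?", (batch_id,)
).fetchone()
print(count)  # 2
```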