Elliott! That's exactly what I was looking for. I knew it was possible; just hadn't dug into it yet.
I'm going to take something like this and divide the work according to the DBA's analysis of server demand. Meaning: at certain times during a 24-hour cycle, demand may be high enough that the DBA wants to reduce the load, and he can throttle it by reducing the number of packages running simultaneously. The number of servers will remain the same; only the number of packages running in parallel will change.
Step 1: Get a list of remote servers with Row_Number() OVER (Order By ServerName) AS RowNum. (Partitioning by ServerName would restart the numbering for every server and give each row RowNum = 1, so the numbering has to run across the whole list.)
Step 2: Get the usage level the DBA wants to use. This will be a number between 1 and probably 10.
Step 3: Take the total number of servers and divide it by the number from Step 2.
Step 4: Assign server ranges to the number of variables the DBA chose to use (While intCnt <= # etc. etc.).
Step 5: Where you split out the processes, I'll put an expression on each path to ensure only the identified ranges are allowed to run.
Step 6: Run stress tests on levels between 1 and 10 to see just how much the network and servers can handle without negatively impacting the end-user experience.
Of course, there will be intRange1Begin, intRange2Begin, etc.
This means I can process as many servers as the network can handle and loop through as many servers within each range as I need to.
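The range math in Steps 1-4 can be sketched roughly like this (Python used only to illustrate the arithmetic; the function and variable names here are made-up placeholders, not the actual package variables):

```python
# Rough sketch of Steps 1-4: number the servers, pick a usage level,
# and split the numbered list into that many contiguous ranges.
# All names here are illustrative, not the real SSIS variables.

def assign_ranges(server_count, usage_level):
    """Return (begin, end) row-number pairs, one per parallel package."""
    usage_level = max(1, min(usage_level, server_count))  # clamp to 1..server_count
    base, remainder = divmod(server_count, usage_level)
    ranges = []
    begin = 1
    for i in range(usage_level):
        size = base + (1 if i < remainder else 0)  # spread leftover servers evenly
        ranges.append((begin, begin + size - 1))   # maps to intRangeNBegin/intRangeNEnd
        begin += size
    return ranges

# Example: 10 servers, DBA sets the level to 3 -> three ranges
print(assign_ranges(10, 3))  # [(1, 4), (5, 7), (8, 10)]
```

The clamp also covers the case where the DBA picks a level higher than the server count, so no range ever comes back empty.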
My question: Do you have any recommendations for tracking all of the work that is going on? Can these individual threads use the same log file, or will I have to build one for each running process and then combine them in the archive process at midnight?
I currently have a flat file (comma-delimited) capturing the steps and work completed, so that if, God forbid, I have to find something, I'll at least have the records logged. I was thinking about archiving the log each night at midnight so it doesn't get too big and slow down the overall processing.
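On the logging question: multiple parallel writers appending to one flat file risk interleaved or lost lines unless every write is serialized, so one common pattern is exactly what you guessed: one file per process, merged by the nightly archive job. A minimal sketch, with Python standing in for the real packages and all file names invented for illustration:

```python
# Sketch of the per-process-log pattern: each parallel process appends to
# its own file, and the midnight archive job merges and clears them.
# Paths and names are placeholders, not a real logging setup.
import glob
import os

def write_log(process_id, message, log_dir="logs"):
    """Each process owns its file, so no two writers ever share one."""
    os.makedirs(log_dir, exist_ok=True)
    with open(os.path.join(log_dir, f"process_{process_id}.log"), "a") as f:
        f.write(message + "\n")

def archive_logs(log_dir="logs", archive_path="archive.log"):
    """Midnight job: combine the per-process logs, then empty them."""
    with open(archive_path, "a") as archive:
        for path in sorted(glob.glob(os.path.join(log_dir, "*.log"))):
            with open(path) as f:
                archive.write(f.read())
            open(path, "w").close()  # start the next day with empty logs
```

Keeping a process ID (or range number) in each file name also tells you at a glance which parallel branch wrote which records when you have to dig through the archive.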
Thank you very much for the visuals. A picture does say 1000+ words.