@Jeff, how do you do parallelism in SQL driven by an Agent job? Note that I don't mean parallelism in execution plans; I mean actual parallel operations, as in SSIS control flow tasks without precedence constraints, or SSIS data flows using the multicast operator, etc.
The @jeff reference you included points to the wrong "Jeff". Since I'm the only "Jeff" (so far) on this thread, I'll assume that question was directed at me.
The other folks have alluded to how I do it. I don't have SSIS installed on most of my servers, but that doesn't prevent me from building a "Maintenance Plan" that calls stored procedures in parallel and then running it through either a permanent or an on-the-fly job, as the others have stated.
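To make that a bit more concrete, here's a minimal sketch of the on-the-fly job idea. The job, proc, and database names below are made up for illustration; the point is that sp_start_job returns immediately, so two such jobs run in parallel:

```sql
-- Minimal sketch: a one-shot Agent job that runs a proc asynchronously.
-- dbo.DoWorkA and MyDb are hypothetical; create a second job the same way
-- for a second proc and the two run in parallel.
USE msdb;

EXEC dbo.sp_add_job
     @job_name     = N'OneShot - DoWorkA'
    ,@delete_level = 3;  -- 3 = delete the job when it completes

EXEC dbo.sp_add_jobstep
     @job_name      = N'OneShot - DoWorkA'
    ,@step_name     = N'Run the proc'
    ,@subsystem     = N'TSQL'
    ,@database_name = N'MyDb'
    ,@command       = N'EXEC dbo.DoWorkA;';

EXEC dbo.sp_add_jobserver
     @job_name    = N'OneShot - DoWorkA'
    ,@server_name = N'(local)';

EXEC dbo.sp_start_job @job_name = N'OneShot - DoWorkA';  -- returns immediately
```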
An example of this is a "balanced" stats update routine I built, where things run in two parallel paths but serially within each path. The two paths carry a different number of databases for the stats rebuilds, and because I'd previously measured each database for duration, I could split them so that the two paths complete at roughly the same time.
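I can't post the original code, but the shape of it is easy to sketch. Everything below (the control table, proc, and column names) is invented for illustration; each path's job calls the proc with its own path number and works through its list serially:

```sql
-- Hypothetical control table: each database is assigned to a path so that
-- the sum of the measured durations is roughly equal on both paths.
CREATE TABLE dbo.StatsPath
(
     DatabaseName   sysname NOT NULL PRIMARY KEY
    ,PathNumber     tinyint NOT NULL  -- 1 or 2; one Agent job per path
    ,AvgDurationMin int     NOT NULL  -- previously measured, used for balancing
);
GO
-- Each path's job runs this with its own @PathNumber; within a path,
-- the databases are processed one at a time (serially).
CREATE PROCEDURE dbo.UpdateStatsForPath @PathNumber tinyint
AS
BEGIN
    DECLARE @Db sysname, @Sql nvarchar(max);

    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT DatabaseName FROM dbo.StatsPath WHERE PathNumber = @PathNumber;

    OPEN cur;
    FETCH NEXT FROM cur INTO @Db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- A 3-part name runs the system proc in the context of that database.
        SET @Sql = N'EXEC ' + QUOTENAME(@Db) + N'.dbo.sp_updatestats;';
        EXEC (@Sql);
        FETCH NEXT FROM cur INTO @Db;
    END;
    CLOSE cur;
    DEALLOCATE cur;
END;
```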
Except for things like that, I've found that needing to "go parallel" with executions frequently means that whoever wrote the procs being executed has failed to make them adequately performant to begin with. When that's the case, I'll help folks with their performance rather than "caving in" and setting up parallel runs.
I'll also state that I'd rather set up asynchronous runs between two or more nearly identical jobs that check on conditions in the data (or whatever) to see if something needs to be done. For example, "file sniffers" that check whether there's anything to do based on the existence of files in a staging directory.
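For the directory check itself, one option is the undocumented (but long-available) xp_DirTree extended procedure. The share path and the dbo.FileQueue table below are strictly hypothetical:

```sql
-- Hypothetical queue table the sniffer feeds and the worker jobs drain.
CREATE TABLE dbo.FileQueue
(
     FileID    int IDENTITY(1,1) PRIMARY KEY
    ,FileName  sysname     NOT NULL UNIQUE
    ,Status    varchar(10) NOT NULL DEFAULT 'PENDING'
    ,ClaimedBy int         NULL
);
GO
-- The sniffer step: xp_DirTree with (path, depth, include-files = 1)
-- returns 3 columns: the name, the depth, and a "this is a file" flag.
CREATE TABLE #DirTree
(
     [Name] sysname NOT NULL
    ,Depth  int     NOT NULL
    ,IsFile bit     NOT NULL
);

INSERT INTO #DirTree ([Name], Depth, IsFile)
 EXEC master.dbo.xp_DirTree '\\SomeServer\Staging', 1, 1;  -- hypothetical path

-- Queue only the files we haven't already seen.
INSERT INTO dbo.FileQueue (FileName)
SELECT dt.[Name]
  FROM #DirTree dt
 WHERE dt.IsFile = 1
   AND NOT EXISTS (SELECT 1 FROM dbo.FileQueue fq WHERE fq.FileName = dt.[Name]);
```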
An example of the latter is a "STEPS" (Standardized Text Extraction and Parsing Subsystem) system I built to import "spotlight" files from Double-Click.net. The company I built it for was receiving hundreds of such files a day, and they simply couldn't handle the load because their then-current system took a whopping 45 minutes just to get ONE file "ready" for import. It didn't actually do an import... the files all had a varying number of columns and had to be pre-normalized before an import could even happen... it was horribly inefficient.

The first thing I did was trash their ineffective "Tower of Babel" code and rewrite it all using only T-SQL. The end result was that I was doing the import, validation, normalization, and "Upsert" to the main table for 8 disparate files all in less than 2 minutes (and that was on SQL Server 2000 on old 32-bit spinning-rust hardware). Then, I wrote a "Conductor" system (think conductor as in orchestra... nothing sophisticated) that would keep track of the files as they came in, along with 4 nearly identical jobs that would each pick the next unprocessed file to do the import with.
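The original "Conductor" code isn't something I can share, so here's just a sketch of the "pick the next unprocessed file" part, reusing the hypothetical dbo.FileQueue table from the sniffer sketch above. The READPAST hint is what keeps several jobs from grabbing the same file:

```sql
-- Each of the (nearly identical) worker jobs runs this to claim one file.
-- READPAST skips rows another job has already locked, UPDLOCK/ROWLOCK keep
-- the claim narrow, and OUTPUT returns the claimed row atomically.
DECLARE @Claimed TABLE (FileID int, FileName sysname);

UPDATE TOP (1) fq
   SET fq.Status    = 'PROCESSING'
      ,fq.ClaimedBy = @@SPID
OUTPUT inserted.FileID, inserted.FileName
  INTO @Claimed (FileID, FileName)
  FROM dbo.FileQueue fq WITH (ROWLOCK, READPAST, UPDLOCK)
 WHERE fq.Status = 'PENDING';

-- If @Claimed came back empty, there's nothing to do; otherwise, run the
-- import proc for the claimed file and mark it 'DONE' when it succeeds.
```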
Because of the rewrite I did, it didn't take long to realize that just one job running in a loop was all that was actually necessary, even with the number of files being received in an 8-hour period (sometimes more than a thousand, which took only a bit more than 4 hours to process in total).