I have to admit... I've never seen an ideally designed database (usually, too many cooks in the design with too little knowledge, you know how it goes) so I can't argue that point either pro or con.
What I have found, though, is that many folks will join 10 or 12 tables trying to get all of the data for a large update all at once... dividing the problem into smaller, more manageable set-based pieces using temp tables has allowed me to convert other folks' 8-to-10-hour runs into 8-to-10-minute runs. Is it a work-around for poor design? Dunno for sure (probably)... all I know is that by storing only the data I need to work with in a temp table, doing one or more updates on that (possibly denormalized) data, and then firing one quick update with only a single join between the temp table and the target table, I've shaved many hours from many sprocs. And because the final update runs so fast and holds locks for so little time, it has actually contributed to substantial decreases in the number of deadlocks we were experiencing.
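For anyone who wants to see the shape of it, here's a minimal sketch of the "divide and conquer" pattern I'm describing. All of the table and column names (dbo.Orders, dbo.Customers, etc.) are made up for illustration... the point is the three-step structure, not the specific schema:

```sql
-- Step 1: Pull ONLY the rows and columns you actually need into a temp table,
-- instead of joining 10-12 tables in one monster UPDATE.
SELECT o.OrderID,
       o.CustomerID,
       CAST(NULL AS VARCHAR(20)) AS NewStatus   -- staging column for the calculated value
  INTO #Work
  FROM dbo.Orders o
 WHERE o.OrderDate >= '2007-01-01';             -- whatever narrows the problem down

-- Step 2: Do one or more cheap updates against the (possibly denormalized)
-- temp table. Each pass touches only #Work, so it takes no locks on production tables.
UPDATE w
   SET NewStatus = CASE WHEN c.CreditLimit > 0 THEN 'Open' ELSE 'Hold' END
  FROM #Work w
  JOIN dbo.Customers c ON c.CustomerID = w.CustomerID;

-- Step 3: Fire one quick update with a SINGLE join between the temp table
-- and the target. This is the only statement that locks the real table,
-- and it's in and out fast, which is what cuts down on deadlocks.
UPDATE o
   SET o.Status = w.NewStatus
  FROM dbo.Orders o
  JOIN #Work w ON w.OrderID = o.OrderID;

DROP TABLE #Work;
```

The exact steps in the middle will vary with the problem... the part that matters is that the big multi-table join never happens inside the statement that modifies the target table.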
--Jeff Moden
Change is inevitable... Change for the better is not.