The past years have been a halcyon period of ever-increasing processor power and clock speeds. However, this bounty has often been used to hide a multitude of coding sins. Got a major performance issue on your system? No problem: throw hardware at it and make it go away, at least for the time being.
However, anyone tempted into this "kill it with iron" approach to resolving performance issues should first take a look at the famous Tally table thread on the SQLServerCentral forums or, more recently, Phil Factor's Speed Phreak competition on asksqlservercentral. The latter presents a classic reporting challenge: for a subscription list with 10,000 subscribers, find the fastest way to provide a month-end report detailing the number of current subscribers, and the number of new subscribers and 'un-subscribers' in that month.
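To make the challenge concrete, here is a minimal sketch of the kind of set-based, single-pass query that tends to win such contests. This is not any entrant's actual solution: the table, its columns, and the sample data are invented for illustration, and SQLite stands in for SQL Server purely so the example is self-contained.

```python
# Hypothetical schema for the subscription-report challenge:
# Registrations(SubscriberID, DateJoined, DateLeft), where DateLeft
# is NULL while the subscriber is still active.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Registrations (
        SubscriberID INTEGER PRIMARY KEY,
        DateJoined   TEXT NOT NULL,   -- 'YYYY-MM-DD'
        DateLeft     TEXT             -- NULL while still subscribed
    )""")
conn.executemany(
    "INSERT INTO Registrations VALUES (?, ?, ?)",
    [(1, '2009-01-05', None),           # joined Jan, still active
     (2, '2009-01-20', '2009-02-10'),   # joined Jan, left Feb
     (3, '2009-02-02', None),           # joined Feb, still active
     (4, '2009-02-15', '2009-02-28')])  # joined and left in Feb

# One set-based pass over the table: for a given month, count who is
# still current at month-end, who joined, and who unsubscribed.
month_start, month_end = '2009-02-01', '2009-02-28'
row = conn.execute("""
    SELECT
      SUM(CASE WHEN DateJoined <= :me
                AND (DateLeft IS NULL OR DateLeft > :me)
           THEN 1 ELSE 0 END) AS CurrentSubscribers,
      SUM(CASE WHEN DateJoined BETWEEN :ms AND :me
           THEN 1 ELSE 0 END) AS NewSubscribers,
      SUM(CASE WHEN DateLeft BETWEEN :ms AND :me
           THEN 1 ELSE 0 END) AS Unsubscribers
    FROM Registrations
""", {"ms": month_start, "me": month_end}).fetchone()

print(row)  # (current, new, unsubscribed) for February 2009
```

The point of the shape, rather than the specific syntax, is that one scan with conditional aggregation replaces the row-by-row cursor logic the editorial goes on to criticise. (Whether a subscriber who leaves on the last day of the month still counts as "current" at month-end is a business rule; this sketch assumes they do not.)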
Some of the solutions, filled with quirky updates, clever aggregations, pivots, cubes and derived tables, are smart enough to make your eyes water, and will keep lesser mortals scratching their heads for a while. However, the effort it takes to try to understand them will bear rich rewards.
Tests performed on a million-row sample database revealed a tenfold increase in performance from a "so-so" SQL solution (4000 ms) to a "solid" SQL solution (400 ms). And as the host himself, Phil Factor, comments, even the ho-hum solutions are a lot better than much of the cursor-ridden production code that is so prevalent.
If this is the sort of reward to be reaped from tuning just one of the dozens of typical reporting queries that make up your average application, then maybe the next time you are tempted to "kill it with iron", why not instead invest the time in hammering and refining your SQL queries? After all, how much money would you have to throw at hardware to see a comparable tenfold performance benefit?