Yep... I realize that my response is a year late, but well done, Chris... Very nice intro to some of the performance measuring tools that are available in SQL Server.
Just a couple of thoughts for everyone, though... having tools without knowing what to expect doesn't really do much...
An example from work... 3 years ago, one of our developers identified a particular stored procedure as a very good candidate for optimization in light of large volumes of data. She very successfully got that process down from 30 minutes to 6 minutes and change. Yippee!! Everyone was happy! I had recently been hired into the company for just such optimizations, so they asked me to optimize the same code without telling me about the developer's recent success. I got the process down to a little over 4 SECONDS and used fewer resources to boot!
Am I bragging? Sure, a bit... who wouldn't? But it points out a much larger problem... a lot of developers simply don't know what to expect for performance and, in light of a success like getting a 30 minute process down to 6 minutes, it's hard to convince folks that their perception of high-performance code needs a little work. They just don't know, and having the tools to measure performance without knowing what's actually possible is pretty meaningless. That's where in-house training, mentorship, and performance standards come into play.
Another problem, particularly in the development phase of a project, is having data to test with. One particular example that comes to mind: one of our developers was asked to write a "grouped running total" based on CustID. Having no data to test against, he used the Northwind DB to test the wonderful correlated subquery he had just been introduced to... Heck, it didn't have a cursor in it, so it must be fast, right? His testing showed [him] that it was fast. The code was promoted to production and immediately had performance problems on just 1,000 rows... it was taking seconds instead of sub-seconds. When it hit 10,000 rows, it took minutes to run. Millions of rows were expected in the next couple of months.
The problem turned out to be the triangular join (about half of a Cartesian join) he'd used in the correlated subquery. A lack of data to test against was the killer here.
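To make the shape of that trap concrete, here's a sketch of the two approaches. The table and column names (Orders, CustID, OrderDate, Amount) are made up for illustration, not the actual code from the story:

```sql
-- The triangular join: for each row, the subquery rescans every
-- earlier row for the same customer, so the work grows roughly as
-- the square of the row count per customer. Fast on Northwind,
-- brutal at 10,000 rows.
SELECT o.CustID, o.OrderDate, o.Amount,
       (SELECT SUM(o2.Amount)
          FROM Orders o2
         WHERE o2.CustID    = o.CustID
           AND o2.OrderDate <= o.OrderDate) AS RunningTotal
  FROM Orders o;

-- A set-based alternative on modern versions (SQL Server 2012+):
-- a windowed SUM makes a single ordered pass over the data.
SELECT CustID, OrderDate, Amount,
       SUM(Amount) OVER (PARTITION BY CustID
                         ORDER BY OrderDate
                         ROWS UNBOUNDED PRECEDING) AS RunningTotal
  FROM Orders;
```

Both return the same running totals; only the second one scales.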
There's a third problem, more insidious than not knowing what's possible or not having enough data to test against. That's TIME... going from 30 minutes to 6 minutes is considered "good enough, next problem please". Managers hover over developers asking "are ya done yet?" Managers and Project Managers, not having a good understanding of why it takes so long, will frequently estimate the time to develop a proc simply by how many blocks the functional diagram (if there is one) has in it. And, of course, everybody, including the developer, wants things done as quickly as possible. What you frequently end up with is code that works great provided there's no real volume of data to work against, because the developer didn't take the time to generate test data (pretty easy, actually, if you know what you're doing) or didn't take the time to see if they could do better. The first "good enough" is hardly ever good enough, and managers just don't understand how much time is expended fixing performance problems down the road. It's much cheaper to do it right the first time, but you need time for that.
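And gennin' up test data really is easy. Here's one common way to do it in SQL Server... cross-joining a system view against itself produces plenty of rows in seconds. The target table name and the column values are invented for the example:

```sql
-- A million rows of randomized test data in one statement.
-- sys.all_columns CROSS JOINed with itself yields far more than
-- a million rows on any SQL Server instance; TOP caps the count.
SELECT TOP (1000000)
       CustID    = ABS(CHECKSUM(NEWID())) % 50000 + 1,             -- ~50,000 customers
       OrderDate = DATEADD(dd, ABS(CHECKSUM(NEWID())) % 3650,      -- random day in a
                           '2000-01-01'),                          -- 10-year window
       Amount    = CAST(ABS(CHECKSUM(NEWID())) % 100000 / 100.0
                        AS MONEY)                                  -- 0.00 to 999.99
  INTO #TestOrders
  FROM sys.all_columns ac1
 CROSS JOIN sys.all_columns ac2;
```

CHECKSUM(NEWID()) is used instead of RAND() because RAND() returns the same value for every row in a set-based statement. Five minutes of this kind of setup would have caught the triangular join long before production did.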
Of course, the fourth and most important problem is the whole bloody relationship between management and developers... management wants the code to be done NOW and the developer likes the paycheck they're getting. The developer knows that in the face of a tight schedule, they can write performance-challenged code to "get the job done now" and that they'll be given more time later (really good for consultants) to fix the performance problems, which, by the way, are perceived as a different set of problems by management. Usually, neither management nor the developers are held accountable for the large expenditure of money and resources it takes to convert performance-challenged, crap production code into something that isn't. Again, that's where some performance standards and some volumetric test data would come into play.
Measure all you want... have your developers become very skilled at measuring the performance of their code. But if you don't also train them on what is possible and expected, and actually give them the time to write code with performance in mind, then the performance measuring tools and their results are just so many pretty lights on a Christmas tree... in July.