• Now that we're all up to speed on generating test data, 😉 you know what would be très cool (not suggesting that Jeff should have to do it, but it would be great if it existed)? A step-by-step guide to setting up an empirical test environment. It seems to me that there are too many traps for us new players that will lead us to incorrectly conclude that method A is better/worse/no different than method B.

    Issues to consider, for example:

    :: I've read recently that you shouldn't conclude that a query that takes X seconds to bring the data to your screen actually took X seconds to run. Most of the "execution" time could simply be shipping a million rows of data over the network. The workaround might be to run the data into a temp table or a table variable... or... something (he said, knowing he was out of his depth)
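For what it's worth, here's a minimal sketch of that workaround, so the timing measures query execution rather than network transfer (table and column names are made up for illustration):

```sql
-- Assigning columns to local variables forces SQL Server to execute the
-- full query while sending almost nothing back to the client.
DECLARE @c1 int, @c2 varchar(50);

SELECT @c1 = SomeIntColumn,        -- hypothetical columns
       @c2 = SomeVarcharColumn
FROM dbo.SomeBigTable;             -- hypothetical table

-- Alternatively, dump the rows into a temp table on the server side:
SELECT SomeIntColumn, SomeVarcharColumn
INTO #scratch
FROM dbo.SomeBigTable;

DROP TABLE #scratch;
```

Note the trade-off: SELECT ... INTO adds tempdb write cost to the measurement, while the variable-assignment version adds almost nothing, so the latter is probably the cleaner baseline.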

    :: How should we treat the cache and buffers?
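One common approach I've seen (an assumption on my part, not gospel) is to make each run start from a known cache state, e.g.:

```sql
-- WARNING: these clear server-wide caches; only do this on a test instance.
CHECKPOINT;                 -- flush dirty pages so clean buffers can be dropped
DBCC DROPCLEANBUFFERS;      -- cold buffer pool: next run reads from disk
DBCC FREEPROCCACHE;         -- discard compiled plans: next run recompiles
```

The alternative school of thought is to deliberately warm the cache by running the query once and timing only subsequent runs, on the theory that warm-cache performance is what production will mostly see. Either way, be consistent across the methods you're comparing.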

    :: How do we set up a timer? I simply set variables to getdate() at the start and end of what I'm trying to test, and datediff them. Is this reasonable?
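The getdate()/datediff approach is basically sound, with two caveats I'm aware of: GETDATE() is only accurate to roughly 3 ms, and DATEDIFF in whole seconds truncates badly for fast queries. A sketch of a slightly sharper version (SYSDATETIME() and datetime2 assume SQL Server 2008 or later):

```sql
DECLARE @start datetime2 = SYSDATETIME();

-- ... the statement under test goes here ...

SELECT DATEDIFF(millisecond, @start, SYSDATETIME()) AS elapsed_ms;

-- Or let the server report timings per statement:
SET STATISTICS TIME ON;   -- prints CPU time and elapsed time for each statement
```

SET STATISTICS TIME has the advantage of separating CPU time from elapsed time, which helps spot queries that are cheap but blocked or waiting on I/O.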

    :: What are the pitfalls of taking the execution plan's "cost relative to batch" and "estimated subtree cost" literally? I've seen them be wildly inaccurate, and not just because of outdated statistics and so on. It's often because the optimizer can't accurately estimate the cost of a scalar function, for example.
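To illustrate that last point (hypothetical names throughout): the optimizer costs a scalar UDF call as essentially free no matter what the function body does, so a query like this can show a tiny "cost relative to batch" while secretly scanning a table once per row:

```sql
-- A scalar UDF that hides a table scan per call.
CREATE FUNCTION dbo.ExpensiveScalar (@id int)
RETURNS int
AS
BEGIN
    -- This correlated count scans dbo.SomeBigTable on every invocation,
    -- but the plan's cost model treats the call as nearly zero-cost.
    RETURN (SELECT COUNT(*) FROM dbo.SomeBigTable WHERE ParentId = @id);
END;
GO

SELECT Id,
       dbo.ExpensiveScalar(Id)   -- runs once per row; invisible to estimated cost
FROM dbo.SomeOtherTable;
```

Which is one more argument for trusting measured elapsed/CPU time over estimated plan costs when comparing methods.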

    :: I've used Adam Machanic's SQLQueryStress tool before, because it can give you an idea of how the query will perform across multiple threads and iterations, with a variety of parameters.

    ..."One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important."... Bertrand Russell