p.s. The essential key for me was learning how to make a shedload of test data in virtually any form I needed it. When it comes to performance, you need a LOT of test data. With only rare exceptions, the minimum size of my main test table for whatever I'm working on is a million rows (which is tiny compared to today's databases), and it's frequently much larger. Here are a couple of links to get you started in that area...
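To make that concrete, here's a minimal sketch of the kind of cross-join row generator I'm talking about. The table and column names (dbo.TestTable, SomeInt, etc.) are just illustrative placeholders, not anything from the linked articles:

```sql
-- Sketch only: cross-joining six 10-row sets yields 10^6 candidate rows,
-- and TOP caps the count. CHECKSUM(NEWID()) gives a cheap random int per row.
WITH E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(N))
SELECT TOP (1000000)
       SomeID    = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       SomeInt   = ABS(CHECKSUM(NEWID())) % 50000 + 1,                      -- 1..50000
       SomeDate  = DATEADD(dd, ABS(CHECKSUM(NEWID())) % 36525, '2000-01-01'),
       SomeMoney = CAST(RAND(CHECKSUM(NEWID())) * 100 AS DECIMAL(9,2))
  INTO dbo.TestTable
  FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f;  -- 10*10*10*10*10*10 = 1,000,000 rows
```

The whole million rows builds in seconds because there's no loop anywhere: it's one set-based statement.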
If you want to see an extreme for generating test data, try generating a million-node hierarchy. See the "HierarchyCode.zip" file at the bottom of the following article. Without being able to generate such a hierarchy auto-magically, on demand, and in a snappy manner, there's no way I could have developed the new method for converting a million-node "Adjacency List" to "Nested Sets" in 54 seconds instead of something like 2 days, nor could I have developed the pre-aggregated hierarchy in part 2 of that 2-part series of articles.
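The general idea behind generating such a hierarchy (this is only a hedged sketch of the technique, not the code in HierarchyCode.zip) is to give every node a random parent with a smaller ID, which guarantees a single root and no cycles:

```sql
-- Sketch: a random 1,000,000-node adjacency list. Each node's parent is a
-- random earlier node, so the result is always a valid, cycle-free tree.
-- dbo.Hierarchy and the column names are illustrative.
WITH E1(N)   AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(N)),
     Nums(N) AS (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                   FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f)
SELECT ChildID  = N,
       ParentID = CASE WHEN N = 1 THEN NULL                       -- node 1 is the root
                       ELSE ABS(CHECKSUM(NEWID())) % (N - 1) + 1  -- random 1..N-1
                  END
  INTO dbo.Hierarchy
  FROM Nums;
```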
The really cool part about knowing how to generate large amounts of custom test data rapidly is that you learn a whole lot about data and a whole lot about how to handle it... and handling data (lots of it) is one of the primary goals of any RDBMS training.
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
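A tiny sketch of what that shift looks like in practice (the table and column names are made up for illustration): the RBAR version visits one row per loop pass, while the set-based version says what should happen to the columns and lets the engine handle the rows:

```sql
-- RBAR thinking (pseudocode): loop a cursor, fetch a row, compute the total
-- for that one row, update it, fetch the next row... a million round trips.

-- Set-based thinking: one statement about the columns.
-- dbo.Sales, Quantity, UnitPrice, and Total are hypothetical.
UPDATE dbo.Sales
   SET Total = Quantity * UnitPrice;
```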
"If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
"Change is inevitable... change for the better is not."
When you put the right degree of spin on it, the number 3|8
is also a glyph that describes the nature of a DBA's job. 😉
How to post code problems
Create a Tally Function (fnTally)
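For anyone who hasn't followed that link yet, the general shape of a Tally (numbers) function is an inline table-valued function built from cascading cross joins, so it returns N sequential integers with zero table reads. This is only a simplified sketch of the idea (the real fnTally in the linked article has a different signature, including a start-at-zero option):

```sql
-- Sketch of a Tally function: cascading cross joins of constants mean
-- no physical table is ever touched. Simplified, hypothetical signature.
CREATE FUNCTION dbo.fnTally (@MaxN BIGINT)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH E1(N)  AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(N)),
     E3(N)  AS (SELECT 1 FROM E1 a, E1 b, E1 c),        -- 1,000 rows
     E12(N) AS (SELECT 1 FROM E3 a, E3 b, E3 c, E3 d)   -- up to 10^12 rows
SELECT TOP (@MaxN) N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
  FROM E12;
```

Usage: SELECT N FROM dbo.fnTally(1000000); -- the numbers 1 through 1,000,000, no loops, no reads.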