• To my knowledge, the newest MCTS exam, 70-433 (Microsoft SQL Server 2008 Database Development), is completely silent on test-driven development techniques.

    Visual Studio Team System has had tools for generating test rows for a while, and it has version control that can document database objects all at once. Many of us, myself included, don't have a Team System license; I use a Red Gate tool to build test rows. A hand-rolled alternative is sketched below.
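
    As a hedged illustration only (not how the Team System or Red Gate tooling works internally), here is one way to hand-build test rows in plain T-SQL; the target table dbo.Customer and its columns are hypothetical names of my own.

        -- Insert 100 disposable test rows into a hypothetical table.
        -- ROW_NUMBER over sys.all_objects is just a cheap way to get
        -- a sequence of numbers on SQL Server 2008.
        INSERT INTO dbo.Customer (CustomerName, CreatedOn)
        SELECT 'Test customer ' + CAST(n.Number AS varchar(10)),
               DATEADD(DAY, -n.Number, SYSDATETIME())
        FROM (SELECT TOP (100)
                     ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Number
              FROM sys.all_objects) AS n;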

    I'm not aware of many resources on TDD for database development at MSDN or other Microsoft-sponsored learning and reference sites either.

    It looks like we're still in something of a Wild West on this. The original question is a good one because there aren't many authoritative sources to opine on the matter right now.

    Another question I'd like to broach is: to what extent should we test, and when?

    It would be a stretch to say that everyone should write a test before they write any code for database development, period.

    Tests assume solid requirements, and in the early stages requirements can be fluid, so specific tests may not yet be available. If we're working in Oslo and using MSchema, creating entities, defining relationships, and having our T-SQL expressed for us, then we're writing code before developing tests for it, and the schema is changing fairly often. So I think there's an obvious collision between test-driven development and the newer model-driven development techniques.

    Alistair Cockburn, one of the founders of the Agile movement, says: "It is clear to most people by now that no one methodology will fit every software project." That applies to test-driven development as well: for any given project there is such a thing as too light and too heavy an application of TDD.

    If you're working on a nuclear reactor with a dozen other database developers, then you might want to write all of your tests first, before creating anything. Maybe you'd query INFORMATION_SCHEMA and test for the existence of your table, all of its columns with their data types, its constraints, and so forth before you actually create the table; a sketch of that kind of test follows. But it might seem odd to write a test first for a table when you're the only developer on a simple application where a bug has no ill effects to speak of.
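
    A minimal sketch of such a test-first check, assuming a hypothetical table dbo.Reading with an INT column SensorId (the names are mine, purely for illustration):

        -- Run before the table exists: both checks should fail, which is
        -- the "red" step. After CREATE TABLE they should pass ("green").
        IF NOT EXISTS (SELECT 1
                       FROM INFORMATION_SCHEMA.TABLES
                       WHERE TABLE_SCHEMA = 'dbo'
                         AND TABLE_NAME   = 'Reading')
            RAISERROR('FAIL: table dbo.Reading is missing.', 16, 1);

        IF NOT EXISTS (SELECT 1
                       FROM INFORMATION_SCHEMA.COLUMNS
                       WHERE TABLE_SCHEMA = 'dbo'
                         AND TABLE_NAME   = 'Reading'
                         AND COLUMN_NAME  = 'SensorId'
                         AND DATA_TYPE    = 'int')
            RAISERROR('FAIL: column dbo.Reading.SensorId should be INT.', 16, 1);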

    I think Cockburn's Crystal "mineral analogy" can guide us here. Minerals have two axes: color and hardness. Color moves from clear through yellow, orange, red, and magenta to blue as the number of developers and the size of the application go up; call that the X axis. The Y axis is hardness, which describes how critical the system is. A nuclear reactor, like other life-critical applications, sits at the "hard" end. Applications get softer from there as less is on the line when a bug slips through.

    So perhaps less formal testing is sufficient for smaller, less critical projects. That said, I think all database developers would benefit from being able to write formal tests and from using a framework in which those tests can be repeated whenever a project calls for it.
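
    Even without a dedicated framework, a repeatable test can be as simple as a stored procedure that asserts something about the schema and is re-run after every change. This is only a sketch under my own assumptions (the test schema and the dbo.Reading table are hypothetical); a real unit-testing framework would add setup, teardown, and result reporting on top of the same idea.

        -- Assumes a schema named [test] already exists for test procedures.
        CREATE PROCEDURE test.AssertReadingHasPrimaryKey
        AS
        BEGIN
            IF NOT EXISTS (SELECT 1
                           FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
                           WHERE TABLE_SCHEMA    = 'dbo'
                             AND TABLE_NAME      = 'Reading'
                             AND CONSTRAINT_TYPE = 'PRIMARY KEY')
                RAISERROR('FAIL: dbo.Reading has no primary key.', 16, 1);
            ELSE
                PRINT 'PASS: dbo.Reading has a primary key.';
        END;
        GO

        -- Re-run the test after each schema change:
        EXEC test.AssertReadingHasPrimaryKey;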

    Bill Nicolich: www.SQLFave.com.
    Daily tweet of what's new and interesting: AppendNow