• What has attracted me to testing frameworks is the prospect of "automated and repeatable" tests. I want to be able to make a change to a subsystem, run the test suite and see pass or fail on all the essential features of the subsystem. If everything passes, then I can proceed with more confidence. Also, others can do the same. So I've been using TSQLUnit for projects that go beyond the basics and can benefit from "automated and repeatable."

    While setting up the tests, I have to keep the longer-range goals of "automated and repeatable" in mind--because the initial tests for the existence of objects add development overhead in the short term to meet those longer-term goals.

    In the TDD (test-driven development) theory I've been exposed to (from Alistair Cockburn--Crystal Clear), tests are designed to fail until the minimum feature that makes them pass is in place, at which point development of that feature ceases. This keeps development very focused and organized, as one feature or tightly-coupled feature set is developed at a time. This benefit alone helps justify the added effort for me.

    So in practice, my first test for, say, developing a stored procedure is a test for the existence of the stored procedure. All that's been developed at that point is a decision about the name of the stored procedure. Not very exciting. However, if someone down the line changed the name, the test would fail and bring the change to attention.
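    A minimal sketch of such an existence test (the procedure name dbo.usp_GetOrder is a hypothetical stand-in; TSQLUnit picks up test procedures by their ut_ prefix and reports failures through its tsu_failure procedure):

        -- Test that the stored procedure exists before any behavior is tested.
        CREATE PROCEDURE ut_Orders_usp_GetOrderExists
        AS
        BEGIN
            -- OBJECT_ID returns NULL when no object of type 'P' (procedure) matches.
            IF OBJECT_ID('dbo.usp_GetOrder', 'P') IS NULL
                EXEC tsu_failure 'Stored procedure dbo.usp_GetOrder does not exist.'
        END
        GO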

    Questions at this point arise for me. It's one thing to check for the existence of a table. But what about a check for the existence of each column along with the data types, constraints, collation and so forth? So far I haven't set up a lot of tests for DDL statements, relying instead on a different tool for comparing versions of objects: Red Gate SQL Compare. I might try more DDL tests now that I'm familiar with INFORMATION_SCHEMA.
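    A DDL test along those lines might look like this (a sketch assuming a hypothetical dbo.Orders table; INFORMATION_SCHEMA.COLUMNS exposes each column's declared data type and nullability):

        -- Verify a column exists with the expected data type and nullability.
        CREATE PROCEDURE ut_Orders_OrderDateColumnDefinition
        AS
        BEGIN
            IF NOT EXISTS (
                SELECT 1
                FROM INFORMATION_SCHEMA.COLUMNS
                WHERE TABLE_SCHEMA = 'dbo'
                  AND TABLE_NAME   = 'Orders'
                  AND COLUMN_NAME  = 'OrderDate'
                  AND DATA_TYPE    = 'datetime'
                  AND IS_NULLABLE  = 'NO'
            )
                EXEC tsu_failure 'dbo.Orders.OrderDate is missing or has the wrong definition.'
        END
        GO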

    I find the TSQLUnit setup and teardown very useful. I can create a set of test rows here, either hand-picked or built with another tool such as Red Gate SQL Data Generator, for testing extremes in values, data types and handling of NULLs.
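    If I have the convention right, TSQLUnit runs a procedure named ut_<suite>_setup before each test in the suite, and rolls each test back in a transaction afterward, which covers most teardown for free. A sketch, again with the hypothetical dbo.Orders table:

        -- Runs before each test in the "Orders" suite.
        CREATE PROCEDURE ut_Orders_setup
        AS
        BEGIN
            -- Hand-picked boundary rows: an extreme date and the INT upper bound.
            INSERT INTO dbo.Orders (OrderID, OrderDate, Qty)
            VALUES (1, '19000101', 0)

            INSERT INTO dbo.Orders (OrderID, OrderDate, Qty)
            VALUES (2, GETDATE(), 2147483647)
        END
        GO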

    Also, I build tests for the existence and proper implementation of check constraints by running insert, update and delete operations on test rows and then checking that only the operations that should succeed actually do.
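    For example, a test that an invalid row is rejected might look like this (a sketch assuming a hypothetical CHECK constraint requiring a non-negative Qty; TRY...CATCH requires SQL Server 2005 or later, and error 547 is the constraint-violation error):

        -- The insert below should violate the CHECK constraint and be rejected.
        CREATE PROCEDURE ut_Orders_NegativeQtyRejected
        AS
        BEGIN
            BEGIN TRY
                INSERT INTO dbo.Orders (OrderID, OrderDate, Qty)
                VALUES (99, GETDATE(), -1)

                -- Reaching this line means the bad row was accepted.
                EXEC tsu_failure 'CHECK constraint allowed a negative Qty.'
            END TRY
            BEGIN CATCH
                -- Expected path: confirm it really was a constraint violation.
                IF ERROR_NUMBER() <> 547
                    EXEC tsu_failure 'Insert failed for an unexpected reason.'
            END CATCH
        END
        GO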

    Tests for performance are an interesting area. Tests are best kept discrete--so performance tests should be separate. These probably belong in their own "suite." Again, the setup and teardown features of a framework come in handy, along with the ability to build large sets of rows. The repeatable nature of tests can really help with performance testing in conjunction with SQL Profiler and other built-in SQL Server tools.
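    A crude but repeatable timing test might look like this (a sketch; the 500 ms budget and dbo.usp_GetOrder are hypothetical, and DATEDIFF against GETDATE() gives only rough, millisecond-level timing):

        -- Fail if the procedure exceeds an agreed-upon time budget.
        CREATE PROCEDURE ut_Perf_usp_GetOrderBudget
        AS
        BEGIN
            DECLARE @start DATETIME
            SET @start = GETDATE()

            EXEC dbo.usp_GetOrder @OrderID = 1

            IF DATEDIFF(ms, @start, GETDATE()) > 500
                EXEC tsu_failure 'dbo.usp_GetOrder exceeded its 500 ms budget.'
        END
        GO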

    I haven't found a good way to automate concurrency tests for T-SQL within a T-SQL framework--and perhaps that's just as well. That's probably better done externally, with a simulation of front-end application use.

    At the end of the day, tests can't replace decision logs and documentation--so those still have their place as well.

    With the use of a framework, though, the days of relying on someone's ad-hoc, afterthought manual tests are gone. Later, if a bug is found or an enhancement is needed, more tests are added to the suite and away we go.

    Bill Nicolich: www.SQLFave.com.
    Daily tweet of what's new and interesting: AppendNow