
Unit Testing With Visual Studio


So much for all my work

My last article was on using TSQLUnit and CodeSmith to automate unit testing of stored procedures in SQL Server databases. I had put quite a lot of work into a CodeSmith template that would accurately generate the execution of stored procedures within our test environment. Time marches on, and here, a couple of months after finishing all that work, it's going to get tossed. Microsoft has almost released the Visual Studio Team Edition for Database Developers (Community Technology Preview 6 is the last one prior to release). Included is some seriously slick unit testing that is automatically generated against your database code. In other words, everything I had just built.

A quick review, because it's worth it. Automated unit testing of this type is specifically targeted at supporting Agile development methodologies (though it can be rewarding as a general approach too). The iterative approach to database development and design requires test-driven development to work, and is best described in Scott Ambler's works Agile Database Techniques and Refactoring Databases. Microsoft has embraced the principles of Agile development internally, documented with a couple of different approaches here. If you're not looking into this as part of your own development, you should, because it clearly has momentum within the industry.

Assumptions

For the purposes of this article, I'm going to assume several things, not far off from the previous article. You've got a working knowledge of TSQL, C#, and Visual Studio. I want to add the same caveat as last time: since I'm not a programming expert, if you spot issues with my methods, please let me know in the comments section. I'm also going to assume a nodding acquaintance with the Team Edition for Database Developers (because I don't want to write a how-to on using it).

Creating a Test

As with the rest of the tool, this is pretty easy. A wizard supplied with the tool can be accessed when you're looking at the Schema View of the database you're working on. Simply right-click on a stored procedure in the list and select "Create Unit Tests..." to get a selection list of all the stored procedures. Select the procedure or procedures you want, then choose an output test project to create (I used a C# project) or pick one that already exists in the solution.

You're then required to configure your test environment. Choose a database where you will run your tests. Optionally, you can validate tests against a second database, also chosen here. You can also decide whether you want to build and deploy your code prior to running tests. All of this can be edited later to modify the behavior of your tests.

The tool immediately generates the TSQL that it will use to test your procedures. It declares a return value for the procedure and a series of variables based on the procedure's parameter list. These are all correct in data type and direction (OUTPUT parameters are picked right up). I used AdventureWorks for the article. Here's a generated test script:

-- database unit test for dbo.usp_GetConfigParamValue
DECLARE @RC INT,
        @ApplicationName VARCHAR (30),
        @ParameterName VARCHAR (50),
        @ParameterTypeName VARCHAR (30),
        @ParameterValueVar VARCHAR (255)

SELECT @RC = 0,
       @ApplicationName = NULL,
       @ParameterName = NULL,
       @ParameterTypeName = NULL

EXEC @RC = [dbo].[usp_GetConfigParamValue] @ApplicationName,
     @ParameterName, @ParameterTypeName, @ParameterValueVar OUTPUT

SELECT RC = @RC,
       ParameterValueVar = @ParameterValueVar

When the tool finishes generating the test scripts, you can edit them. You can use them as supplied and simply modify the parameter values so that the procedure will return meaningful values.

In addition to letting the tool generate your code, you can simply add a blank test and write your own TSQL to call the procedure you're interested in testing.
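For example, here's roughly what the generated script might look like after editing. The application and parameter names below are placeholders I made up for illustration; you'd substitute values that actually exist in your configuration tables:

-- database unit test for dbo.usp_GetConfigParamValue, edited with sample values
DECLARE @RC INT,
        @ApplicationName VARCHAR (30),
        @ParameterName VARCHAR (50),
        @ParameterTypeName VARCHAR (30),
        @ParameterValueVar VARCHAR (255)

-- supply real values in place of the generated NULLs so the procedure
-- returns something meaningful
SELECT @RC = 0,
       @ApplicationName = 'MyApplication',
       @ParameterName = 'ConnectionTimeout',
       @ParameterTypeName = 'Numeric'

EXEC @RC = [dbo].[usp_GetConfigParamValue] @ApplicationName,
     @ParameterName, @ParameterTypeName, @ParameterValueVar OUTPUT

-- return the values the test conditions will evaluate
SELECT RC = @RC,
       ParameterValueVar = @ParameterValueVar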

Once you've got the scripts configured, you'll want to start creating Test Conditions, which are basically asserts. The tool supplies a number of test conditions right out of the box. Each of these conditions may have parameters that can be set to determine whether or not the test has been passed for the procedure you're running. For example, the test condition Scalar Value, which tests whether the results contain a particular value, can be configured for a particular column, row, result set (meaning there can be multiple result sets from the test), null setting and, finally, the expected value. Here are the conditions included out of the box:

  • Empty Result Set: Requires that the procedure tested return nothing
  • Execution Time: Maximum time allowed for the given test to execute
  • Inconclusive: The default test. Simply a placeholder to remind you to set your test conditions
  • Not Empty Result Set: Requires that the procedure tested return something
  • Row Count: Specify the number of rows anticipated by the test
  • Scalar Value: For a given column and row, does the value match the test value
A project manages a group of tests. If you want to divide your tests into multiple run groups, you'd add multiple test projects to your solution. Each test run can take advantage of a test initialize script and a test cleanup script. Then, each test within the run can have its own pre- and post-test scripts. These allow you to prepare data, clean up after an insert, reseed identity values, etc., all as an automated part of the testing process.

It's important to use the scripts if you need to clean up after a test, because the tests are each run under their own user context and within their own transaction. This is so that you can run the tests as a particular user, mimicking their security settings, etc.
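As a sketch of how the pre- and post-test scripts might be used (the table, columns, and values here are hypothetical, just to show the shape of it), a pre-test script could stage a known row for the procedure to find:

-- hypothetical pre-test script: stage a known configuration row to test against
INSERT INTO dbo.ConfigParam
    (ApplicationName, ParameterName, ParameterTypeName, ParameterValue)
VALUES
    ('TestApp', 'ConnectionTimeout', 'Numeric', '30')

And the matching post-test script could put things back so repeated runs start from a known state:

-- hypothetical post-test script: remove the staged row
-- (this is also where you could reseed identity values, e.g. with DBCC CHECKIDENT)
DELETE FROM dbo.ConfigParam
WHERE ApplicationName = 'TestApp'
  AND ParameterName = 'ConnectionTimeout'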

Running a Test

Once you've created all your tests and set up test conditions for each of them, you can run the tests. The easiest way to start running tests is to open the Test View window. From there you can run all the tests within a project or pick out individual tests to run. Simply click the green run arrow after selecting some or all of the tests. The tool will open the Test Results window (all these windows are dockable, of course). This will show the results of your tests. You'll get either Failed, Inconclusive (which means you haven't edited your test conditions), or Passed. The goal, of course, is to get a full set of green lights.

If you have a failed condition, an error message will show the basic failure. You can double-click on the result row to open a window with the details of the failure.

You can also choose to rerun a test, or tests, directly from the Test Results window. Click the check boxes to include or exclude certain tests and then hit the green run arrow again.

Finally, you can run tests from the command line and incorporate them into builds or deployments from the rest of the tool. This allows you to develop the database in much the same test-driven manner as the developers working on their code. The command-line utility can be configured to run with MSBuild.

A Little Beyond the Basics

Test conditions are your principal tool when running tests. However, you can also add TSQL assert statements. This may sound complicated, but what you're actually doing is setting up a condition within your TSQL statement. It can be as simple as checking the OUTPUT parameter from the query defined above:

IF @ParameterValueVar = 42
BEGIN
    RAISERROR('Output answered ultimate question without defining question', 18, 1)
END
    

Yes, a simple RAISERROR statement will show your test as a failure. This can be a very useful mechanism for extending your test conditions for several reasons. First, familiarity: we all work with TSQL pretty extensively, so we're probably better equipped to create custom tests that we can call from user-defined functions or stored procedures within the test code. Second, reuse: you may already have some test code, say that data compare routine I used in my old test code, that you want to implement within your new testing mechanism. Finally, speed: it's going to be faster to test the results within the script than to return those results to the client for testing.
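To make that pattern reusable, you could wrap the comparison and the RAISERROR call in a small assert procedure kept alongside your test database. This is just a sketch of the idea; the procedure name and signature are my own invention, not anything supplied by the tool:

-- hypothetical helper: fail the test if two varchar values don't match
CREATE PROCEDURE dbo.tst_AssertEqualsVarchar
    @Expected VARCHAR(255),
    @Actual VARCHAR(255),
    @Message VARCHAR(255)
AS
BEGIN
    IF (@Expected <> @Actual)
        OR (@Expected IS NULL AND @Actual IS NOT NULL)
        OR (@Expected IS NOT NULL AND @Actual IS NULL)
    BEGIN
        -- severity 18, as in the example above, is enough to register a failure
        RAISERROR('%s Expected: %s Actual: %s', 18, 1, @Message, @Expected, @Actual)
    END
END

A test script would then call it after executing the procedure under test, for example:

EXEC dbo.tst_AssertEqualsVarchar
    @Expected = '30',
    @Actual = @ParameterValueVar,
    @Message = 'usp_GetConfigParamValue returned the wrong value.'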

Since the conditions are simply DLLs, there is a mechanism for actually writing your own test conditions. You may need more complicated logic than is available within TSQL, you may want to work largely within a single environment for the tests, or you may want to control the tests within this single managed code base without having to deploy additional database code in order to run tests. I've actually written one for comparing data sets. I'll put that code and an explanation into another article.

Conclusions

This tool is well on its way to becoming one of the most important within our team. Our ability to integrate directly with the developers and offer them a high degree of certainty that our adjustments to the stored procedures won't impact their code has earned us a great deal of respect from their teams. Yes, implementing this tool requires a fairly fundamental shift in how you develop your database and database code, but it's worth it. Microsoft has moved database development much further down the road toward the ability to refactor databases.
