• Roger Fleig (11/13/2008)


    A really strong plus is if the wiki format supports revision history, style guidelines and annotations like wikipedia. If it has true multi-database sql language support then that is a real plus too. Let's also cheer for open source -- thus allowing you to customize it in your own environment.

    Hello Roger. Yes, FitNesse (the testing framework that DbFit runs in) is a simple wiki that supports versioning and basic markup. It is not nearly as extensive as the MediaWiki software that Wikipedia runs on, but the end goal here is not publishing articles. And yes, DbFit supports more databases than just SQL Server: it also supports Oracle and MySQL, and PostgreSQL support should be coming soon.
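    To give a rough idea of what a DbFit test page looks like (the connection details below are made up, and I am going from memory on the fixture names, so check the DbFit documentation for the exact syntax), a minimal SQL Server page in flow mode would be something along these lines:

        !|dbfit.SqlServerTest|

        !|Connect|localhost|testuser|testpassword|TestDb|

        !|Query|SELECT 1 AS Result|
        |Result|
        |1|

    The wiki tables are the tests: each |Query| table is run against the database, and the rows below the column header are the expected results.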

    Developers will write great unit tests if they have a low overhead tool. IMHO, this means adopting something that runs in or near the IDE they already use. Visual Studio, NUnit and JUnit give you this. A wiki page will be very different and may be an inhibitor for that reason.

    I do agree that sticking within the SSMS IDE would be ideal, but I (and we here at ChannelAdvisor) have found the separation between SSMS and DbFit not to be a hindrance. We have instances of DbFit running on our local machines, and we have centralized FitNesse servers that combine both the application-level integration tests and the DbFit tests.

    Code re-use looks hard here. I see some signs of a sort of "#include" or "import" but re-using wiki-style syntax seems a little hard to me. Also it duplicates the functionality of existing programming languages and environments. Without proper code-reuse you end up with copy/paste code and lack of abstractions where you really need them. Developers may do the same thing 5 different ways. This hurts you when you want to make disruptive changes.

    I am not sure that this is as difficult or as much of an issue as you seem to feel it is. The include functionality allows for whatever common setup you need. Outside of that, the idea is to test low-level pieces, so I am not sure how much code reuse, no matter how easy or hard, would be a part of this type of testing. These tests can be very isolated; this is not the same situation as building an application and needing consistency in the business layer.
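    As a sketch (the page path here is hypothetical), a shared setup page can be pulled into any test page with FitNesse's !include widget, and pages named SetUp and TearDown are run automatically around every test page beneath them:

        !include .DatabaseTests.SetUp.ConnectAndLoadTestData

    That covers the connection and baseline data without copying it into every page.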

    Similar to the last point, a very common operation on test code is to make updates to introduce a new dimension, variable or behavior. I would have some concerns about using a wiki for making such updates. Your bulk search/replace options are very limited since all of your code is locked up in wiki pages. Others might have a better feel for these options than I in this case.

    I am not sure I see this as an issue. Any number of tests can be performed per page, and any number of pages can exist. Making slight changes is easy if the structure of the tests was planned ahead of time.
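    FitNesse variables also help here (the names below are hypothetical): define a value once on a parent or suite page, for example

        !define TargetSchema {dbo}

    and reference it from every test page as ${TargetSchema}, so a "slight change" becomes a single edit on the parent page:

        !|Query|SELECT COUNT(*) AS Cnt FROM ${TargetSchema}.Customers|
        |Cnt|
        |3|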

    Most importantly, though, I don't see a very rich way to programmatically calculate results. I like to use an "oracle" or a "model" that I trust. Worst case you can persist the results (like this solution does). But saving your "expected results" can be problematic because of type system differences, subtle changes, "over verification" -- checking more than you intended to -- and most importantly human error! One or two rows may work, but if your functionality needs to be tested at 1000 or 1,000,000 rows this is very impractical. You would then need to do tricks to capture your results and just verify rowcount or an aggregate -- a missing abstraction in this tool.

    In this situation I think the idea is to create a controlled environment of test data to begin with. If you control the inputs and have a result set that you expect to get back, then you don't need extra calculations: if each individual row is correct, then the set as a whole is correct. Also, I would not test that size of data in this tool. The idea here is functionality, not performance. Hence I would test enough rows to show the variety of data coming back, but on a smaller scale. Testing the performance of 1 million rows or so can be handled through a different testing mechanism. The idea here is to make sure that the logic is correct, and that does not require production-level volumes of data.
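    As a sketch of that approach (the table and column names are made up), you seed exactly the rows you care about with an Insert table and then assert the exact result set you expect back:

        !|Insert|dbo.Orders|
        |OrderId|CustomerId|Amount|
        |1|100|25.00|
        |2|100|75.00|

        !|Query|SELECT CustomerId, SUM(Amount) AS Total FROM dbo.Orders GROUP BY CustomerId|
        |CustomerId|Total|
        |100|100.00|

    With a handful of hand-picked rows like this, the expected results are small enough to reason about by eye.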

    Like I said, this is a cool idea, and maybe for very limited unit testing it is great. But I would worry about betting too much on such a system and building a large body of unit tests that need to be maintained.

    I appreciate your comments but would argue that DbFit specifically addresses the issue of maintenance. That is one reason I like it so much. It encapsulates all of the functional testing without having to put that testing structure into the DB. And as your system grows, these tests can be combined into Suites and easily changed to adapt to changes in the logic. Since it is a wiki, it is also easy to comment on and document what each test is trying to accomplish, so that someone coming in much later to update it spends less time figuring out its intent.
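    For example (the procedure and column names are hypothetical), free text and the test tables sit side by side on the same page, and a parent page marked as a Suite runs every child test under it:

        This page verifies that dbo.GetCustomerTotal returns the
        correct total for a single customer. See the SetUp page
        for the baseline data.

        !|Query|EXEC dbo.GetCustomerTotal @CustomerId = 100|
        |Total|
        |100.00|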

    Take care,

    Solomon....

    SQL# https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
    Sql Quantum Lift https://SqlQuantumLift.com/ ( company )
    Sql Quantum Leap https://SqlQuantumLeap.com/ ( blog )
    Info sites: Collations     •     Module Signing     •     SQLCLR