• Matt Miller (#4) (8/24/2015)


    Eric M Russell (8/24/2015)


    When I write unit tests, they are run in both development and QA. Each unit test is a script that maps to a specific functional requirement. A test plan lists the sequence in which the scripts are to be run, including any prerequisites (e.g., a client named 'ABC' must exist) and the expected outcome (the script should return 3 orders totaling $12,550).

    unittest_2-1.sql

    unittest_2-2.sql

    ...

    I run the test plan in development, and someone in QA runs the same test plan in QA in parallel. The scripts that query expected outcomes should have matching results between the two environments.
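    A minimal sketch of what one of those scripts might look like, using the hypothetical client and figures from the example above (table and column names are assumptions, not from the original post):

```sql
-- unittest_2-1.sql (hypothetical sketch)
-- Prerequisite: a client named 'ABC' must exist.
IF NOT EXISTS (SELECT 1 FROM dbo.Client WHERE ClientName = 'ABC')
    RAISERROR('Prerequisite failed: client ABC does not exist.', 16, 1);

-- Expected outcome: 3 orders totaling $12,550 for client ABC.
-- Run in both environments; the two result sets should match.
SELECT COUNT(*)     AS OrderCount,   -- expected: 3
       SUM(o.Total) AS OrderTotal    -- expected: 12550.00
FROM dbo.[Order] AS o
JOIN dbo.Client  AS c ON c.ClientID = o.ClientID
WHERE c.ClientName = 'ABC';
```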

    +1

    In some cases you might need to "formally" create a mapping document so that your widget's unit test can be assigned to a specific sub-function (where you've "assigned" part of a larger function to that particular widget, and if that widget fails to perform its part of the job, the whole job falls apart). This works for the non-functional aspects as well (e.g., in order to provide all billing info within 10 seconds, I need the database query to retrieve the billing detail in 0.5 seconds).

    But either way, if you "know" this data is good and you get a bad outcome on the test, then either (a) the code is bad, or (b) the test is wrong.

    Yes, I typically have the scripts print out the hh:mm:ss duration and the statistics profile (text execution plan) as well. The process of coordinating a valid test plan with the business also ensures there are no missed requirements.
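    In T-SQL, that duration and plan output can be captured with session settings along these lines (a sketch; the query shown is a hypothetical stand-in for the query under test):

```sql
-- Wrap the query under test with timing and plan output.
SET STATISTICS TIME ON;     -- prints parse/compile and execution CPU/elapsed times
SET STATISTICS PROFILE ON;  -- returns the text execution plan after each statement

SELECT COUNT(*) AS OrderCount   -- hypothetical query under test
FROM dbo.[Order]
WHERE OrderDate >= '2015-01-01';

SET STATISTICS PROFILE OFF;
SET STATISTICS TIME OFF;
```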

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho