Don't Use Code Coverage

  • Comments posted to this topic are about the item Don't Use Code Coverage

  • A real code coverage metric has a useful purpose. If the value is too low, then the set of unit tests is not going to be as helpful as it should be. As a manager, the actual value isn't as important as the relative value. Low scores can indicate poor tests or overly complex T-SQL. They can also indicate T-SQL with many error paths that are not being tested.

    A code coverage metric is also useful in the context of DevOps and automated testing. If the code coverage metric drops on a new release, it's probably an indication that there is new, untested code.

    A code coverage metric is one tool. Fixating on the value to be achieved provides little benefit. Focusing on why it varies from what's expected, both for newly introduced artifacts and in regression testing, can catch problems before they occur. A high value won't guarantee everything is functionally correct.
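
    The relative-value idea above can be sketched as a simple CI gate. This is a minimal sketch, assuming the figures come from a tool such as SQL Cover; the tolerance and percentages are illustrative, not from the post:

    ```python
    # Minimal sketch of treating coverage as a relative signal in CI.
    # Hypothetical helper; real figures would come from a tool such as
    # SQL Cover, and the 2-point tolerance is an illustrative assumption.

    def coverage_gate(baseline: float, current: float, tolerance: float = 2.0) -> bool:
        """Pass the build unless coverage dropped more than `tolerance`
        percentage points below the previous release's baseline."""
        return (baseline - current) <= tolerance

    # A 7-point drop flags probable new, untested code; a small wobble does not.
    print(coverage_gate(78.0, 71.0))  # False -> investigate before release
    print(coverage_gate(78.0, 77.1))  # True  -> within normal variation
    ```

    The point is that the gate compares against the team's own baseline rather than an absolute target.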

  • I wholeheartedly agree with Brad on this. It's a tool that provides a smell that should be further investigated. The code coverage "score" is arbitrary, but using it to baseline what is normal should help flag missing tests if the figure decreases.

    Also, SQL Cover is free and open source, so there's absolutely no harm in using it, particularly if you're already using an automated build / CI tool.

  • T-SQL is a high-level declarative language that generally doesn't involve a lot of branching (although every organization has a handful of 2,000+ liners). So, I'm guessing that Code Coverage in the context of T-SQL would be testing your code with the goal of ensuring that the resulting execution plan(s) perform well under various data-set sizes and input parameters.

    That's what I do when I'm Unit Testing a new stored procedure, sometimes creating mockup datasets that are 10x the expected size of the data in production after one year, or auto-generating stored procedure calls using 1,000 different variations of parameters.
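
    That parameter-variation approach can be sketched roughly like this; the procedure name `dbo.usp_GetOrders` and its parameters are hypothetical stand-ins, and the generation is shown in Python for brevity:

    ```python
    # Sketch of auto-generating stored procedure calls across many
    # parameter variations. dbo.usp_GetOrders and all parameter values
    # are hypothetical stand-ins for a real procedure under test.
    from itertools import product

    statuses   = ['Open', 'Closed', 'Pending', 'Archived']               # 4 values
    region_ids = range(1, 11)                                            # 10 values
    page_sizes = [10, 50, 100, 500, 1000]                                # 5 values
    sort_cols  = ['OrderDate', 'Total', 'Customer', 'Region', 'Status']  # 5 values

    # Cartesian product: 4 * 10 * 5 * 5 = 1,000 distinct call variations.
    calls = [
        f"EXEC dbo.usp_GetOrders @Status = '{s}', @RegionId = {r}, "
        f"@PageSize = {p}, @SortCol = '{c}';"
        for s, r, p, c in product(statuses, region_ids, page_sizes, sort_cols)
    ]

    print(len(calls))   # 1000
    print(calls[0])
    ```

    Each generated statement can then be executed against the mockup dataset while capturing duration and plan metrics.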

    Also, I'm thinking that a traditional Regression Test, perhaps something like a comprehensive test plan or replaying a production SQL trace in QA to simulate actual user activity, is one way to achieve Code Coverage for the SQL back-end of a database application.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • As multiple folks have mentioned before, focusing solely on mechanical code coverage will most often yield a bad unit test setup.  But then again, that's true of pretty much any principle or metric used without considering what it tells you or is intended for. I prefer a combination of code coverage and functional use case coverage, with additional focus or emphasis on heavily used areas if you have detailed usage metrics.  That tends to yield a fairly decent testing package without requiring the mother of all test packages or other anti-patterns.

    That said - I can't say I agree with not setting up tests for code branches that "aren't being used" at all. It's one thing if the code cannot ever be accessed or triggered unless someone hacks the application, but if the code is a feature on the UI or can easily be accessed, then it's getting a test.  Besides - we tend to develop using TDD, so tests are built out when new functionality is introduced (i.e., before you know whether it actually will be used).  If you're deploying code, you had better know it won't break things, so not setting up basic testing frankly is not an option.

    By the way, I do agree with campaigning to remove features if no one plans to use them, but until the code IS removed, what it does and how it operates is on us.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • I really appreciate this article, Steve. I'm living this now. I was one of those who introduced the team I'm on to unit testing, and as a corollary that brought along code coverage. As far as I know, no other team at the state agency I work at uses unit tests, let alone code coverage. I've written about this before, but this job is the first one where I've even used unit testing; I learned about it before coming onboard. It's been good, but about 6 months into it I noticed that we were writing a bunch of unhelpful unit tests just so that we could maximize code coverage. I didn't realize it at the time, but I was writing unit tests to test the most basic of things that really don't need testing (e.g., that assigning a property of a POCO actually returned the value you assigned, as if suddenly the .NET framework would stop working).

    Well, we're all learning something none of us had experience with before. I appreciate the links describing the vain pursuit of approximating complete code coverage. I'll bring them to my team.

    Rod
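
    The kind of unhelpful, coverage-chasing test Rod describes looks roughly like this (a Python analogue of the .NET POCO case, with hypothetical names):

    ```python
    # Illustration of the coverage-chasing anti-pattern described above:
    # a test that only proves the language's attribute assignment works.
    # `Customer` is a hypothetical stand-in for a .NET POCO.
    from dataclasses import dataclass

    @dataclass
    class Customer:
        name: str = ""

    def test_name_round_trips():
        c = Customer()
        c.name = "Ada"
        # This can only fail if the runtime itself is broken; it raises
        # the coverage number without exercising any logic we wrote.
        assert c.name == "Ada"

    test_name_round_trips()
    ```

    The test passes, bumps the coverage figure, and tells you nothing about the application.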
