Test Coverage

  • There is a very definite skill in writing a suite of useful tests. You can't approach it as a "check the box" exercise.

    Automated testing allows a huge range of repeatable tests to be applied quickly. My first engagement with a professional tester was a humbling experience. I take pride in my work and put a lot of effort into my code, but the tester found gaps I simply hadn't thought of.

    The thing is that, over time, the knowledge of what is required for testing is absorbed and results in a richer set of automated tests (a sketch of what such a suite can look like follows this post).

    I see UAT as separate from QA. UAT should be about the usability of a system rather than a "flush out the bugs" exercise.
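
    As a rough illustration of the kind of repeatable suite being described, here is a minimal sketch assuming Python and pytest; trim_name and its cases are invented, with each row standing in for a gap a tester might have pointed out:

        import pytest

        def trim_name(raw):
            """Toy function under test: collapse internal whitespace in a display name."""
            return " ".join(raw.split())

        # Each tuple is one repeatable check; gaps a tester points out become new rows.
        CASES = [
            ("Alice", "Alice"),               # happy path
            ("  Bob  ", "Bob"),               # leading/trailing spaces
            ("Carol\tDavis", "Carol Davis"),  # tab instead of space
            ("", ""),                         # empty input
        ]

        @pytest.mark.parametrize("raw,expected", CASES)
        def test_trim_name(raw, expected):
            assert trim_name(raw) == expected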

  • David.Poole (11/14/2013)


    ...I see UAT as separate from QA. UAT should be about the usability of a system rather than a "flush out the bugs" exercise.

    I agree, but feel that the reality is that other defects will still be noted, due to things such as missed or misunderstood requirements.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Gary Varga (11/14/2013)

    I agree, but feel that the reality is that other defects will still be noted, due to things such as missed or misunderstood requirements.

    NSR = Non-Stated Requirements

  • There are two areas we struggle with in testing. The first is that the data gets changed during testing and is difficult to restore in order to retest. The second is that there is no one better to test than the end user, who doesn't know what is expected and seems to find exactly the kinds of input that destroy the programmer's expectations. That doesn't happen for a unit test, but it does for a more "releasable" state of the product.
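
    One common way to tackle the first problem of getting data back to a known state is to restore a baseline copy before every test. A minimal sketch, assuming Python, pytest and SQLite; baseline.db and the orders table are hypothetical stand-ins for whatever the real system uses:

        import shutil
        import sqlite3
        import pytest

        BASELINE = "baseline.db"  # known-good seed data, rebuilt as part of each release
        WORKING = "working.db"    # disposable copy that tests are free to mutate

        @pytest.fixture
        def db():
            # Restore the known state before every test so that data changed by
            # one run never leaks into the next.
            shutil.copyfile(BASELINE, WORKING)
            conn = sqlite3.connect(WORKING)
            yield conn
            conn.close()

        def test_order_quantities(db):
            db.execute("UPDATE orders SET qty = qty + 1 WHERE id = 1")  # mutate freely
            (total,) = db.execute("SELECT SUM(qty) FROM orders").fetchone()
            assert total > 0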

  • There are so many dimensions to "realistic" test data, not least that data has natural hot spots. It's difficult to come up with test data that truly simulates the production environment.

    In terms of getting to a repeatable state, we are looking to rebuild a baseline after a successful release rather than re-running an ever-increasing number of scripts with an ever-increasing rebuild time (a sketch of that rebuild approach follows this post).

    The discipline we try to adopt is to look at what it is we are trying to achieve, take a step back, and ask what we need in order to achieve it, rather than bash our heads against a brick wall with a particular solution. In effect, OODA: Observe, Orient, Decide, Act.
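
    A minimal sketch of that baseline-plus-delta rebuild, assuming Python; the backup path, the restore_db.sh and run_sql.sh helpers, and the date-prefixed migration naming are all assumptions for illustration, not an actual tool:

        import pathlib
        import subprocess

        BASELINE_BACKUP = pathlib.Path("backups/post_release.bak")  # hypothetical backup taken after the last release
        MIGRATIONS_DIR = pathlib.Path("migrations")                 # scripts named like 20170115_add_column.sql
        BASELINE_DATE = "20161220"                                  # scripts up to this date are already in the baseline

        def rebuild(database):
            # Restore the known-good baseline instead of replaying every script
            # from the start of the project (restore_db.sh stands in for the
            # team's existing restore mechanism).
            subprocess.run(["./restore_db.sh", database, str(BASELINE_BACKUP)], check=True)

            # Apply only the scripts written since the baseline, in date order.
            for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
                if script.stem[:8] > BASELINE_DATE:
                    subprocess.run(["./run_sql.sh", database, str(script)], check=True)

        if __name__ == "__main__":
            rebuild("test_db")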

  • Still waiting for the "automation" of building the test scripts themselves. The automated running is not rocket science, just a bit of work to set up in the first place.
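
    The closest thing available today is probably property-based testing, where the tool generates the concrete test inputs rather than a person writing each case by hand. A small sketch using the Hypothesis library for Python; slugify is an invented example function:

        from hypothesis import given, strategies as st

        def slugify(title):
            """Toy function under test: lower-case a title and join the words with hyphens."""
            return "-".join(title.lower().split())

        # Hypothesis generates many inputs per run and shrinks any failure to a
        # minimal example, so nobody has to hand-write the individual cases.
        @given(st.text())
        def test_slugify_has_no_spaces(title):
            assert " " not in slugify(title)

        @given(st.text(alphabet="abcdefghijklmnopqrstuvwxyz", min_size=1))
        def test_slugify_is_idempotent(title):
            assert slugify(slugify(title)) == slugify(title)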

  • David.Poole (12/20/2016)


    There are so many dimensions to "realistic" test data, not least that data has natural hot spots. It's difficult to come up with test data that truly simulates the production environment.

    In terms of getting to a repeatable state, we are looking to rebuild a baseline after a successful release rather than re-running an ever-increasing number of scripts with an ever-increasing rebuild time.

    The discipline we try to adopt is to look at what it is we are trying to achieve, take a step back, and ask what we need in order to achieve it, rather than bash our heads against a brick wall with a particular solution. In effect, OODA: Observe, Orient, Decide, Act.

    It's difficult at first. However, this is an ongoing process. As you change the system, find issues, etc., add test data to cover more cases. Don't try to be perfect, and use the OODA process as you build the test data.
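
    A small sketch of what that incremental growth can look like, assuming Python; the customers table, the column names and the issue labels are invented. Each defect found in the wild earns a new entry, so the data set improves along with the system rather than being designed perfectly up front:

        # A growing catalogue of edge-case rows, one entry per issue found so far.
        EDGE_CASE_CUSTOMERS = {
            "ISSUE-101 unicode name":   {"name": "Ægir Ångström", "country": "IS"},
            "ISSUE-142 empty country":  {"name": "Test Co", "country": ""},
            "ISSUE-173 very long name": {"name": "x" * 255, "country": "GB"},
        }

        def load_edge_cases(conn):
            """Insert every catalogued edge case into the test database."""
            for label, row in EDGE_CASE_CUSTOMERS.items():
                conn.execute(
                    "INSERT INTO customers (name, country) VALUES (?, ?)",
                    (row["name"], row["country"]),
                )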

  • I had never heard of the OODA loop. Like a lot of thinking on agility, and just like design patterns, it is more a recognition and formalisation of something some people already do.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!
