Errors that slip past all testing.

  • Why is it that after extensive testing of a query, stored procedure, DTS package, job, etc., we find no errors and move it to production, and then the next thing we know someone finds an error?

    Or does this just happen to me?

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • The problem I have always found, from the developer standpoint, is that we understand how it should work and tend not to find the errors because we don't throw curve balls at it. I also find that in testing, people are often given a list of things to try with a step-by-step process, and they find a few bugs (usually because they did not follow the steps). Before I do any extensive testing on the method itself, I like to grab a few folks and tell them to play with it and see if they can figure it out (that's what most will do in production anyway, to see what they can get away with). When they come to me and ask how to do something, I tell them that will be the next phase of testing; for now, just do what feels right, or try to do what they want and see if it will let them. A lot of people in management think this kind of testing is pointless because they feel it cannot be tracked properly and doesn't fit the proper workflow, but I have found my biggest bugs that way.

  • Antares hit the nail on the head IMO.

    I was on the other side of this the other day (the goofy user who found a bug).

    We're bringing a Cognos/Oracle-based BI solution in house. I work with the vendor's DBA to get the database all set up just right, tweaked just like it is running in many other installations. They then ask me to test our setup by running some of the reports. I merrily go about clicking buttons and looking at pretty reports that mean nothing to me, as I'm new to the company and still on the steep learning curve of the nuances of the business. Then I click a certain report and it prompts for a value: "What is your gobbledygook goober level for this report?" (that's how my brain interpreted it). Since I have no idea what a gobbledygook goober level should be, I pick a random number and settle on 10. Click. $(*&#@$ ERROR!!! Oracle cannot allocate segments for tablespace 578430!!

    Bottom line - the temp table wasn't big enough to do the sort required because I had entered an entirely unrealistic number. This product had been in production all over the place and no one had ever noticed this problem because, until I came along, everyone using the product knew that a gobbledygook goober level just had to be somewhere in the hundreds of thousands. It was fixed in a few minutes by changing the code to first validate that the value entered was in the correct range (a sketch of that kind of check follows this post).

    Moral: the worst possible tester is you (though you should always, always test it yourself). The best tester is a random person who has no idea what it is supposed to do.

    That said, this happens to me way more than I'd like to admit. What I've noticed, though, is that if you keep track of the errors that come up over time (I keep a notebook), you'll notice that the same basic types of errors happen over and over. Then you can start coding defensively for these.

    JasonL


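
    For illustration, here is a minimal T-SQL sketch of that kind of range check (the actual fix in the story was on the Oracle side, and the procedure, parameter, table, and limits below are invented for the example):

        CREATE PROCEDURE dbo.usp_RunGooberReport
            @GooberLevel int    -- value typed into the report prompt
        AS
        BEGIN
            -- Reject missing or unrealistic values before the expensive sort ever runs.
            -- The 100000-999999 range is a hypothetical business rule, not the vendor's.
            IF @GooberLevel IS NULL OR @GooberLevel NOT BETWEEN 100000 AND 999999
            BEGIN
                RAISERROR('Goober level must be between 100000 and 999999.', 16, 1)
                RETURN 1
            END

            -- Placeholder for the real report query.
            SELECT AccountID, SUM(Amount) AS Total
            FROM dbo.BillingDetail
            WHERE GooberLevel = @GooberLevel
            GROUP BY AccountID
            ORDER BY Total DESC
        END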

  • Especially for applications, I like to test on a clean machine logged in as a real user. Too many times something runs fine on a developer machine but not in production, due to either DLL issues or permission problems (disk or SQL), and sometimes both. I also like to place apps in limited production first, with both a few totally new users and a few power users, a power user being a normal user who likes to work fast, aggressively, creatively, etc. Not only do you find the odd bug or two, but you often find problems in the UI design.

    Andy

  • I do my best to keep track of errors I know can occur and test for them during my testing.

    Sometimes I think part of the problem (at least for our SQL dept) is that we're not given enough time to test, even though we state we need more time, and that we have no testing/QA dept. It is up to us developers to test, and sometimes we can get a few other people in the company to test.

    I agree that a good tester is someone unfamiliar with what something can and can't do, and of course most everyone at our company knows how our programs work. Another good tester is someone who intentionally puts in junk data to see if he/she can get an error (a quick example follows this post).

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems
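
    As a quick illustration of the junk-data approach, a throwaway negative-test script (reusing the hypothetical procedure from the sketch earlier in this thread) could be as simple as:

        -- Missing and out-of-range values: each call should fail cleanly
        -- with the range-check error, not with garbage output.
        EXEC dbo.usp_RunGooberReport @GooberLevel = NULL
        EXEC dbo.usp_RunGooberReport @GooberLevel = -1
        EXEC dbo.usp_RunGooberReport @GooberLevel = 2147483647

        -- Wrong data type: expect a conversion error from SQL Server itself.
        -- EXEC dbo.usp_RunGooberReport @GooberLevel = 'junk'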

  • quote:


    Sometimes I think part of the problem (at least for our SQL dept) is that we're not given enough time to test, even though we state we need more time, and that we have no testing/QA dept.


    When Murphy strikes, he strikes hard. We try our best to test in as close to a production environment as possible, but even with that, there are always at least minor differences. Hence the importance of a rollback plan for the implementation (a simple example of such a rollback script follows this post).

    But I feel your pain, as once we're in production, we're really stuck. I agree with Antares' approach in that the "see if you can break it" testing outside of a regulated test plan should also occur. Users will click where no developer expected them to click, and therefore it is important to have some users just go and "play."

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

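
    As an illustration of the rollback point, one simple approach is to script out the current production object before replacing it, so it can be put back in seconds; the procedure below is purely hypothetical:

        -- Saved before deployment: the current production version of the procedure.
        -- If the new version misbehaves, rerun this script to restore the old one.
        IF OBJECT_ID('dbo.usp_GetInvoiceTotal') IS NOT NULL
            DROP PROCEDURE dbo.usp_GetInvoiceTotal
        GO

        CREATE PROCEDURE dbo.usp_GetInvoiceTotal
            @InvoiceID int
        AS
            SELECT SUM(Amount) AS Total
            FROM dbo.InvoiceDetail
            WHERE InvoiceID = @InvoiceID
        GO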

  • As applications get more complex and grow, it is impossible to thoroughly test them.

    Let me repeat that. It is impossible. With a few hundred lines of code containing a few dozen branches, WAY less than most applications, there will be hundreds of millions of possible paths through the code, plus all the combinations of inputs to factor in. It can't be done.

    What you should do is have a methodology that sets down a series of things to test. You should test that things work as expected AND test that you cannot break things. As mentioned above, the worst tester (who cares) is you. You will have problems breaking your code. You will probably not think of many possibilities, so it's best to have someone else look at your code and test it.

    Here we have a separate QA guy, who still can't test everything, but he does a good job looking at the app to see that things work (positive testing) as well as seeing that different cases that are not intended do not break the app (negative testing).

    I know I'm rambling a little, but you do the best you can, have more eyes look at it, and keep things simple. And trap for errors (a small error-trapping sketch follows this post).

    This is one of the reasons behind Open Source: tons of eyes can look at the code and spot flaws or suggest improvements.

    Steve Jones

    steve@dkranch.net
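
    On the "trap for errors" point, here is a minimal sketch of the classic @@ERROR pattern (the table, procedure, and columns are invented for the example):

        CREATE PROCEDURE dbo.usp_InsertOrder
            @CustomerID int,
            @Amount     money
        AS
        BEGIN
            DECLARE @err int

            BEGIN TRANSACTION

            INSERT INTO dbo.Orders (CustomerID, Amount)
            VALUES (@CustomerID, @Amount)

            -- Capture the status immediately; any later statement resets @@ERROR.
            SET @err = @@ERROR
            IF @err <> 0
            BEGIN
                ROLLBACK TRANSACTION
                RAISERROR('Insert into Orders failed (error %d).', 16, 1, @err)
                RETURN @err
            END

            COMMIT TRANSACTION
            RETURN 0
        END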

  • I agree that a good test plan and trying to break the system in test are vital. There is an old saying among programmers that you can make something foolproof but not idiot-proof. If you connect that to Murphy's Law (you can take your pick of versions at http://dmawww.epfl.ch/roso.mosaic/dm/murphy.html, but I refer to the one that starts "If something can go wrong..."), you will see that there are a lot of idiots out there.

    William H. Hoover

    Louisville, Ky

    sweeper_bill@yahoo.com



  • Then there's the saying that goes something like: "When you finally make something idiot-proof, they just go and build a better idiot." Unfortunately, these words seem very, very true.

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/


  • Yeah, that seems true every day, but they also say "practice makes perfect," and I am telling you, some of the idiots must practice regularly.
