SQLServerCentral Editorial

Poor Software Testing


Due to an email sending issue, this editorial from last week is being resent in the newsletter.

I am a big advocate of testing your code, including your database code. I like repeatable testing, especially unit testing. I think this has contributed to the increase in quality over the last ten years as more developers have incorporated unit testing into their work. That, along with the rise of standard frameworks, seems to have resulted in fewer crashes and less instability in much of the software I use today.

That being said, the client ultimately decides if software works as expected. Certainly there can be logic errors, but there also could be errors in how specifications and requests are interpreted. This is one reason we need humans to do some QA testing and clients to ensure there is some user acceptance review.

Apparently that didn't happen with one election machine. A hash is supposed to be used to verify that the correct version of the software is installed. However, if that reference hash isn't there, the machine still reports that things are fine. That is certainly a problem, though not necessarily one that users would detect. We would expect someone who purchases, updates, or administers these machines to check for the correct version.

That check wasn't enough here, because the acceptance testing was done by the vendor. While I am a big advocate of developers checking their own work, there needs to be independent evaluation by a CI process, and there ought to be some QA review by another group. Certainly a client ought to be able to double-check the software as well, and some client should have done acceptance testing here.

As the world moves more towards DevOps-style software development, we need better testing, and that likely needs to include some independent testing outside of the developers and testers. I could certainly see the need for clients to submit some tests and get verification that those tests passed. Automated CI/CD systems can do this and provide detailed logs of what happened.

Ultimately we may still have bugs, either because we don't have enough testing, or because we don't quite write code that does what the client expects. We may also have performance issues, but that's another testing topic.

We can get better, but we have to work to do so, learning from our mistakes, ensuring we are always improving testing, and listening to feedback.