Comments posted to this topic are about the item "Database testing: The ferret and the rabbit."
I've worked with someone who takes out a calculator. He uses it to approach the data from a different angle. It often works.
Personally, I'd say you should always check important calculations in some "by hand" way; just because you think you programmed the software right doesn't mean the results come out right.
Special rounding and age calculation rules are common (leap day); implicit conversions can be problematic... and who else remembers the Pentium floating point bug?
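To make the leap-day point concrete, here is a minimal Python sketch (not from the thread; the function name and dates are illustrative) showing why age calculations need a deliberate rule for February 29 birthdays:

```python
from datetime import date

def age_in_years(born: date, today: date) -> int:
    """Whole years elapsed. A Feb 29 birthday counts as not yet
    reached in a common year until March 1 under this rule."""
    years = today.year - born.year
    # Subtract one year if this year's anniversary hasn't occurred yet.
    if (today.month, today.day) < (born.month, born.day):
        years -= 1
    return years

# Born on a leap day: is this person 20 or 21 on 2021-02-28?
print(age_in_years(date(2000, 2, 29), date(2021, 2, 28)))  # 20
print(age_in_years(date(2000, 2, 29), date(2021, 3, 1)))   # 21
```

A different (equally defensible) business rule could treat February 28 as the anniversary in common years, which is exactly the kind of ambiguity a tester checking "by hand" would surface.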
The ability to deliberately try to break the system in every reasonable way... over and over and over again... is invaluable in a tester.
I'll agree that it's difficult (impossible, in my case) for a developer to test their own code. Having a tester who revels in finding a way to bring your code screeching to a halt is what produces great applications.
Part of every calculation code deliverable should be another program that independently calculates key subtotals and totals.
The second program can be a set of SQL scripts with instructions to fill in the parameters as you go, but it should exist, and it should be built at the same time.
If the two programs do not agree, it's certain that the calculation in one of them is wrong. Reconciling the differences leads to additional assurance that the main program is working correctly.
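The double-check idea above can be sketched in a few lines. This is a hypothetical example (data and names invented, in Python rather than SQL for brevity): the "main program" accumulates subtotals in one pass, while the checker recomputes each one by a deliberately different method, and any disagreement flags a bug in one of them:

```python
from collections import defaultdict

orders = [  # hypothetical source rows
    {"region": "east", "amount": 100.0},
    {"region": "west", "amount": 250.0},
    {"region": "east", "amount": 75.5},
]

def main_subtotals(rows):
    """The 'main program': accumulate subtotals in a single pass."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

def check_subtotals(rows):
    """Independent double-check: recompute each subtotal by filtering
    and summing, a deliberately different approach from the main pass."""
    regions = {row["region"] for row in rows}
    return {r: sum(row["amount"] for row in rows if row["region"] == r)
            for r in regions}

main = main_subtotals(orders)
check = check_subtotals(orders)
assert main == check, f"Reconciliation failed: {main} vs {check}"
```

The value comes from the two paths sharing as little logic as possible, so a bug in one is unlikely to be reproduced in the other.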
If data is being transformed from one system into another as part of the calculation, the mechanism for doing so should include an audit data structure such that one can easily write a query showing exactly which source records were included in exactly which destination subtotals. This is invaluable for quickly identifying lost or double-counted records. When built into the initial design, it's actually faster to build the main program and the double-check program together than to build the main program alone, at least if the main program is expected to work perfectly.
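An audit structure of the kind described might look like the following sketch (all names and data hypothetical, Python standing in for the database layer): each transformed row records which source id landed in which destination subtotal, and a simple "query" over that trail surfaces lost or double-counted records:

```python
# Hypothetical source rows being transformed into regional subtotals.
source = [
    {"id": 1, "region": "east", "amount": 100.0},
    {"id": 2, "region": "west", "amount": 250.0},
    {"id": 3, "region": "east", "amount": 75.5},
]

subtotals = {}
audit = []  # (source_id, destination_key) pairs, built during the transform

for row in source:
    key = row["region"]
    subtotals[key] = subtotals.get(key, 0.0) + row["amount"]
    audit.append((row["id"], key))

# "Query" the audit trail: every source id should appear exactly once.
seen = [sid for sid, _ in audit]
lost = {r["id"] for r in source} - set(seen)
doubled = {sid for sid in seen if seen.count(sid) > 1}
assert not lost, f"Lost records: {lost}"
assert not doubled, f"Double-counted records: {doubled}"
```

In a real system the audit pairs would live in a table populated by the transform itself, so the reconciliation query can be run at any time, not just during testing.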
I think dedicated testers working alongside developers to find their bugs are crucial to better applications. The two working together will produce a much better application.
I have given a name to my pain...
MCM SQL Server, MVP
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
I originally went to college for a technical communications degree, and IT was my fallback. I still consider writing to be my primary vocation, but IT has turned out to be where the money is.
Anyway, one of the main things I learned from my writing classes is that when it comes to editing, Rule #1 is "Love is far-sighted." That means that if you really want to make sure that what you've written is good, you need to either have someone else take a look at it or distance yourself from it by setting it aside for a day or two and then checking it yourself.
Rule #2 is that writing and editing are not identical skill sets (although they do have much in common). Someone who's a good writer might not necessarily be a good editor, and vice versa.
Phil's post shows that the same two principles hold true in the IT realm, particularly where programming and testing are concerned. I just thought it was cool that his message ties in so nicely with writing and editing. Neat post! 😎
Oh, yes, there is a lot in common between writing literature and writing software, and the editing and testing processes are likewise entwined. The one cannot work without the other, and they must have mutual respect. All the famous authors I can think of relied on having a brilliant editor.