• TomThomson (5/23/2015)


    That's a great editorial, and I totally agree with it. If everyone based the design and build of systems only on verified (or at least adequately tested) hypotheses, we would suffer far less from bugs and other unintended behaviour, and the cost of software/system development and support would drop significantly; it is really crazy to push out assertions without any supporting evidence, which ultimately has to be test- and measurement-based. Unfortunately, while many developers, DBAs, and engineers understand that, there are plenty of managers out there who have either forgotten it or never understood it, and who drive their staff into forgetting it.

    But the scientific method doesn't generally lead to proofs. Hypotheses are falsifiable by test (or at least they were until string theorists got hold of physics), but a failure to falsify merely means that the predictions the hypothesis makes hold for certain experiments, and only within the limits of accuracy of measurement; it does not make the hypothesis some sort of absolute truth. Many experiments may be needed to give a decent degree of confidence, and even after that we may hit circumstances where the hypothesis turns out not to fit. Newtonian mechanics, for example, survived experiment for centuries but is now known to be false (although it's still valid for many purposes), because it doesn't work at the atomic scale or smaller and it doesn't work at the astronomical scale (it gets the orbit of Mercury wrong, for example).

    Yes, I prefer to prove things; I'm still a mathematician at heart. And yes, I can prove some statements about performance, because they may be simple statements of pure mathematics. But there are far more performance statements where all I can prove is that there's a decent probability that a particular algorithm will outperform another, because which performs best often depends on what data is fed to them and what environment they run in; in some cases I can't even do that to any useful extent (in which case I'd prefer not to produce any software for that problem). What I can do is formulate hypotheses, test them, and publish both the hypotheses and the tests, even though doing so gives no proof of correctness, only either a better degree of confidence or a proof of incorrectness. That, of course, is what happens in real science (as opposed to in maths and logic and string theory), and it is what the scientific method is (or perhaps used to be) all about.
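
    A minimal sketch (in Python, with names and sizes invented purely for illustration) of the kind of data-dependent hypothesis test described above: the claim "pre-building a hash set makes membership tests faster" holds for one workload and fails for another, so only measurement settles it.

    import timeit

    def lookup_via_list(data, targets):
        # No preprocessing: each probe is an O(n) scan of the list.
        return [t in data for t in targets]

    def lookup_via_set(data, targets):
        # O(n) preprocessing, then O(1) probes; pays off only with enough probes.
        s = set(data)
        return [t in s for t in targets]

    data = list(range(10_000))
    workloads = [("one probe", [4_999]), ("many probes", list(range(1_000)))]
    for label, targets in workloads:
        t_list = timeit.timeit(lambda: lookup_via_list(data, targets), number=100)
        t_set = timeit.timeit(lambda: lookup_via_set(data, targets), number=100)
        print(f"{label:11s}  list={t_list:.4f}s  set={t_set:.4f}s")

    On a single probe the plain list scan tends to win, because building the set costs more than one scan; with many probes the set wins easily. Neither answer is "the" right one until the workload is named, which is the whole point.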

    And sometimes I want to throw some software together just to get measurements from which to formulate hypotheses; experimentation that allows hypotheses to be formulated is just as much a part of the scientific method as experimentation to test them, and it is clearly applicable in the computing world. "I did this, got these results, and can't make head nor tail of them" is a perfectly reasonable thing to say in a scientific paper too, and if computing and/or software engineering are science-based, that sort of paper must be allowed as well, not just papers describing hypotheses and the experiments carried out to check them.
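
    And a sketch of that exploratory kind of measurement: no hypothesis yet, just a throwaway harness producing timings to stare at (the function and the sizes here are invented for the sketch).

    import timeit

    def build_by_prepend(n):
        out = []
        for i in range(n):
            out.insert(0, i)  # the cost of this line is the thing being explored
        return out

    for n in (1_000, 2_000, 4_000, 8_000):
        t = timeit.timeit(lambda: build_by_prepend(n), number=10)
        print(f"n={n:5d}  total={t:.4f}s")

    Doubling n roughly quadruples the time, which suggests, but does not prove, a quadratic-cost hypothesis that a follow-up experiment can then be designed to test or falsify.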

    Well stated and true. That's what I meant when I said you have to reprove experiments that have been cited rather than just take their word for it. The experiment has to be set up correctly, and not just for the question at hand.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)