• djackson 22568 (5/19/2014)


    hisakimatama (5/16/2014)


    Hrm. Well, it's still rather early in my career (just under four years of being a DBA/Database programmer so far), but I'd say I've definitely failed so far, on several occasions :-P. The core failure in every case to date has simply been that I've failed at failing.

    I'd imagine it's not too different for most new programmers, but I could certainly be wrong. When I first started, I had a few screwups and mistakes, and my response was one of puzzlement. "How did that go so wrong? I tested it plenty! It shouldn't have done that!" and so on. Eventually, in my puzzlement, I'd stumble upon the answer.

    There is nothing at all wrong with that, assuming you did indeed test it. Asking how something went wrong is not an issue; it is correct behavior. Expecting our code to work correctly before releasing it to end users is also correct.

    Those who accept that their code will have issues frequently end up with far more issues, because their standards are too low.

    When I write something, say to extract data for someone, I test it until I believe it is correct. At that point, I tell the end user to expect it to fail, and I ask them to prove me wrong. Most of the time they can't. But their attempts to break it, or to find my mistakes, generally lead to better QA than I can do myself.

    The key to what I am saying is that we should code until we believe we have it right, but we should encourage others to find the flaws we may have missed. We should not simply accept our limitations and expect others to deal with them. The line between good work and shoddy work is hard to find, but I think most good programmers really believe their output is correct before releasing it. Shoddy programmers believe it is good enough.

    Oh, certainly! I should clarify: it wasn't so much that I tested it to the limits of the edge cases I could think of; rather, I simply tested it to the point where I thought it shouldn't break. That, of course, didn't include the fun edge cases where users would try things I hadn't thought of, but really should have :-). Then, once it broke, I felt personally offended, with the ever-so-terrible mindset of "they shouldn't have done that anyhow! It wasn't in the intended use cases!"

    Now that I've stopped thinking in such limited terms, though, I instead test as far as I can, then throw in some random nonsense to make sure the process won't break. On top of that, I try to plan for any weird-yet-possible cases that could happen, like a user totally ignoring that an import process actually needs something to import, or trying to import an image instead of a text file. Add in the appropriate messaging for those errors, plan for a few more strange cases, brace for any failures that slip through, and deploy! 😀
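    To make that last bit concrete, here's a rough sketch of the sort of pre-import sanity checks I mean. It's just illustrative Python, not my actual process; the file name and messages are made up, and a real import would add checks of its own.

        import os

        def validate_import_file(path):
            """Run basic sanity checks before handing a file to the import step.

            Returns a list of human-readable problems; an empty list means the
            file looks safe to import.
            """
            problems = []

            # The user may have forgotten to supply anything to import at all.
            if not os.path.isfile(path):
                problems.append(f"'{path}' does not exist - nothing to import.")
                return problems

            # An empty file is another easy-to-miss "nothing to import" case.
            if os.path.getsize(path) == 0:
                problems.append(f"'{path}' is empty - nothing to import.")
                return problems

            # Catch the "image instead of a text file" case: sniff the first
            # bytes for common image signatures and for NUL bytes, which a
            # plain text file shouldn't contain.
            with open(path, "rb") as handle:
                header = handle.read(512)
            image_signatures = (b"\x89PNG", b"\xff\xd8\xff", b"GIF8", b"BM")
            if header.startswith(image_signatures) or b"\x00" in header:
                problems.append(f"'{path}' does not look like a text file.")

            return problems

        if __name__ == "__main__":
            # Hypothetical file name, purely for illustration.
            for issue in validate_import_file("monthly_extract.csv"):
                print("Import halted:", issue)

    The point isn't the specific checks; it's that each "weird" case gets caught before the import runs and gets its own plain-English message, instead of surfacing as a confusing failure halfway through.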

    - 😀