The task of the defensive SQL Server programmer is to produce code that behaves consistently and predictably in cases of unexpected usage; code that is resilient to minor changes to the underlying schema objects, to database settings, and, as far as possible, to SQL Server upgrades. All developers aspire to build SQL code that will last beyond tomorrow lunchtime, but just how tight are the shackles that restrict what SQL we can use, in the name of defensive programming?
Say a business application requires a critical calculation to be performed on a table holding several million rows of data. You've developed a SQL routine, tested it under as many use cases as you can conceive, and it is consistent, reliable and fast. However, the technique used for the calculation is not officially supported by Microsoft. Does the defensive programming philosophy dictate that such techniques should be avoided completely? After all, in a future version of SQL Server, or when a service pack is issued, the technique is not guaranteed to continue working. Such code would struggle to conform to most people's definition of "robust".
Nevertheless, the business need remains and is pressing, and alternative solutions are failing to meet performance targets. Surely, in such cases, prohibiting a viable technique on the grounds that it may or may not break in a future version is counter-productive. However, there is a real onus on the defensive programmer to put safeguards in place to ensure the long-term reliability of such code, should conditions change.
In some industries defensive programming is, by necessity, taken to extremes. When, for example, writing software for an aircraft flight controller, you need to be as sure as you can possibly be that it will respond correctly, whatever the conditions. A single critical calculation will be made by not one but several trusted algorithms, simultaneously, the idea being that all must agree on the correct answer. Complex model checking tools will be used to test all possible responses to every possible input.
While such measures are too extreme for the average SQL programmer, there is no reason why they can't at least take a tip from the 13th Century and implement "double-entry bookkeeping" for all critical calculations. This age-old technique, where "every transaction or event impacts at least two different accounts", is easy to implement, though I wonder how common it is. The idea is simply to write comparative tests into the code: you have two methods, side by side, that perform exactly the same calculation on a small subset of the data. The two answers are compared and an alert is raised immediately if they differ. At this point, with the code fully tested, documented, and with a safeguard in place, I believe the defensive programmer has met his obligations.
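As a rough illustration, the comparative check might look something like the following T-SQL sketch. The table names (`dbo.Sales`, `dbo.SalesDetail`), column names and the sampling predicate are all hypothetical stand-ins; the point is only the shape of the safeguard, with the "clever" routine and an independent, straightforward calculation producing the same figure over a small sample, and a loud failure if they ever disagree.

```sql
-- A minimal sketch of the "double-entry" check.
-- Hypothetical schema: dbo.Sales (CustomerID, Amount) holds the totals
-- produced by the fast routine; dbo.SalesDetail (CustomerID, Quantity,
-- UnitPrice) allows the same figure to be derived independently.

DECLARE @TotalA DECIMAL(18, 2),
        @TotalB DECIMAL(18, 2);

-- Method 1: the figure as produced by the fast (unsupported) routine,
-- checked over a small subset of the data only
SELECT @TotalA = SUM(Amount)
FROM   dbo.Sales
WHERE  CustomerID BETWEEN 1 AND 100;

-- Method 2: an independent, deliberately simple calculation of the
-- same figure from the underlying detail rows
SELECT @TotalB = SUM(d.Quantity * d.UnitPrice)
FROM   dbo.SalesDetail AS d
WHERE  d.CustomerID BETWEEN 1 AND 100;

-- The two answers must agree; if not, fail loudly rather than let a
-- silently broken calculation reach the business
IF @TotalA <> @TotalB OR @TotalA IS NULL OR @TotalB IS NULL
    RAISERROR('Double-entry check failed: calculation methods disagree.',
              16, 1);
```

Because the check runs only over a small sample, it costs little at run time, yet it will trip the moment a service pack or upgrade changes the behaviour of the unsupported technique.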
Of course, as Alex Kuznetsov describes in his forthcoming book, Defensive Programming with SQL Server, code can and should be made more resilient by following best practices and by continuously revisiting and retesting your code and your assumptions. However, this shouldn't stop the defensive programmer from using proven techniques to which Microsoft prefers not to give its stamp of approval.