• I think it's a good question, in that it's extremely obvious from the options given what the required answers are - so much so, in fact, that it's difficult to understand how 14% came up with wrong answers - and the answers are a sound theoretical response to the question.

    But in the real world there are often considerations that tend to lead to different answers, and I'm surprised that out of nearly a thousand people who have answered none have taken a serious stab at raising those considerations.

    On the database side, the page referenced makes the comment that one should normalise first and then denormalise to make it work; why denormalise (introduce redundancies in the data)? Answer: so that it will be possible to obtain answers in an acceptable time / with acceptable use of CPU resources. This is pretty common, not a strange edge case. And how often do we see a UNIQUE constraint or index where none of the affected columns is nullable? Every time we see that, we know that there is deliberately introduced redundancy in the schema (but maybe this one is a rare edge case).
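
    To make the denormalisation point concrete, here is a minimal sketch in Python with the standard sqlite3 module (all table and column names are invented for illustration): the normalised pair of tables needs a join for a common report, while the denormalised variant carries a redundant copy of the customer name so the same report becomes a single-table scan; the UNIQUE index shows the other kind of deliberate redundancy.

        import sqlite3

        con = sqlite3.connect(":memory:")
        cur = con.cursor()

        # Normalised: the customer name is stored exactly once.
        cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
        cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
                    " customer_id INTEGER REFERENCES customer(id), total REAL)")

        # Denormalised: the name is copied into every order row - deliberate
        # redundancy that turns a join into a single-table scan on the hot path.
        cur.execute("CREATE TABLE orders_denorm (id INTEGER PRIMARY KEY,"
                    " customer_id INTEGER, customer_name TEXT, total REAL)")

        # A UNIQUE index on non-nullable columns is itself a second,
        # deliberately maintained copy of those column values.
        cur.execute("CREATE UNIQUE INDEX ux_customer_name ON customer(name)")

        cur.execute("INSERT INTO customer VALUES (1, 'Acme')")
        cur.execute("INSERT INTO orders VALUES (1, 1, 99.0)")
        cur.execute("INSERT INTO orders_denorm VALUES (1, 1, 'Acme', 99.0)")

        # The normalised report needs a join; the denormalised one does not.
        print(cur.execute("SELECT c.name, o.total FROM orders o"
                          " JOIN customer c ON c.id = o.customer_id").fetchall())
        print(cur.execute("SELECT customer_name, total FROM orders_denorm").fetchall())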

    Also on data redundancy, what if what I am doing is logging messages? That can introduce redundancy for all sorts of reasons. The messages may be passing between systems that rely on a "tell me thrice" rule to assist in sanity checks. They may be passing through an extremely noisy channel with no back channel for acknowledgement or flow control, so that each message is not only redundantly coded but transmitted several times. And even with clean channels in both directions the message collection will often be redundant, and a bulk redundancy-elimination scheme that sorts out common substrings across different messages will be a performance nightmare if it is designed to eliminate all redundancy.
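
    As a rough sketch of the "tell me thrice" idea (Python again; the message format and function names are invented, not taken from any particular protocol): each message is transmitted three times and the receiver accepts whatever value at least two copies agree on, so the logged traffic is redundant by design.

        from collections import Counter

        def send_thrice(payload: bytes) -> list[bytes]:
            """Transmit the same payload three times over an unreliable channel."""
            return [payload, payload, payload]

        def accept(copies: list[bytes]) -> bytes | None:
            """'Tell me thrice': accept whatever value at least two copies agree on."""
            value, votes = Counter(copies).most_common(1)[0]
            return value if votes >= 2 else None  # no majority: reject / ask again

        copies = send_thrice(b"TEMP=21.5")
        copies[1] = b"TEMP=2#.5"            # simulate corruption of one copy in transit
        print(accept(copies))               # b'TEMP=21.5' - the majority recovers the value
        print(accept([b"A", b"B", b"C"]))   # None - no two copies agree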

    On modularity, what if I need to use macros or in-line functions to obtain performance? Now the same code appears in many places in the delivered product, but I don't lose any consistency provided my build and update system ensures that, when something changes, all affected object modules are included in the rebuild. Keeping everything in step manually, without an automatic rebuild system, can be an awful pain, of course, but not inlining at all can mean unacceptable delays instead of acceptable response times for end users. The overhead of context switching between separate modules can be crippling.
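
    That paragraph is really about C macros and inline functions, but a loose Python analogue (function names invented for illustration) shows the same trade-off: the "inlined" version duplicates the logic for speed, and some automated check has to stand in for the rebuild system that keeps every copy consistent with the canonical definition.

        def scale(x):
            """Canonical definition of the hot operation."""
            return x * 2.5

        def total_modular(values):
            # Modular version: one call per element; on a hot path the call
            # overhead is the price of keeping the logic in exactly one place.
            return sum(scale(v) for v in values)

        def total_inlined(values):
            # Manually "inlined" copy of scale(): faster, but the logic now
            # lives in two places, so something automated has to rebuild or
            # re-check every copy whenever the canonical definition changes.
            return sum(v * 2.5 for v in values)

        # The analogue of the automatic rebuild step: fail loudly if copies drift.
        assert total_modular(range(1000)) == total_inlined(range(1000))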

    edit: correct English

    Tom