• skamath wrote:

    So, would you agree that if you encounter *exactly* such instances (and you seem to indicate that you do), then unless you have a time machine to travel back and pre-correct the poor design, your choices are:

    1. Surrender, stating that the database was poorly designed.

    2. Try to correct the mistake.

    If you choose 2., what is wrong with using the technique in this article? I do not recall the author suggesting that you should first design poorly and then use his technique to correct it.

    You are quite correct. Having encountered such instances, I explain to my client that while I would love to have his business writing a fix for his database, the real problem is the design, and my fix will NOT eliminate any problems with the data beyond the moment it is applied. I may risk a contract doing that, but more often than not the client will ask what exactly the problem is and what the best correction would be.

    There is nothing wrong with the technique in the article, though I say that with some reservation concerning locks, keys, indexes, and already published reports that may have relied on the faulty data. I have also found that most cases involving duplicate rows are not nearly as clean and simple as the sample illustration and need quite a bit of massaging, especially if the offending duplicate values are used as foreign keys or indexes. I'm just saying there is much more to it; even in the simple example there are still pitfalls, and still more efficient ways to achieve the same end.
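    Since the article's exact technique isn't reproduced in this thread, here is a minimal sketch of one common dedup approach: keep the row with the smallest surrogate key in each duplicate group and delete the rest. The table and column names (`customers`, `name`, `city`) are hypothetical, and the sketch assumes a surrogate `id` column exists; SQLite via Python is used only so the example is self-contained.

    ```python
    import sqlite3

    # In-memory demo table seeded with duplicate rows (names are made up).
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)"
    )
    conn.executemany(
        "INSERT INTO customers (name, city) VALUES (?, ?)",
        [("Ann", "Oslo"), ("Ann", "Oslo"), ("Bob", "Bergen"), ("Ann", "Oslo")],
    )

    # Keep MIN(id) per duplicate group; delete every other copy.
    conn.execute(
        """
        DELETE FROM customers
        WHERE id NOT IN (
            SELECT MIN(id) FROM customers GROUP BY name, city
        )
        """
    )

    remaining = conn.execute(
        "SELECT name, city FROM customers ORDER BY name"
    ).fetchall()
    print(remaining)  # one row per distinct (name, city) pair
    ```

    Note this illustrates exactly the pitfall mentioned above: if the deleted rows' keys are referenced elsewhere as foreign keys, those references must be repointed to the surviving row *before* the DELETE, or the cleanup breaks referential integrity.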