I don't understand why any discussion of surrogate vs natural keys always degenerates into an ideological 'flame war'. At least this particular thread has remained fairly tame.
So a further 2c...
What we're in danger of forgetting in these 'enthusiastic' discussions is that databases don't exist for their own benefit, and that the people who pay us (directly or indirectly) to create said databases (i.e. the end users) don't care about database theory (no matter how much we try to convince them that they should).
Then, between the end users and our databases sits an 'abstraction layer' known as the 'application software developer'. These good people have a moral obligation to understand basic database theory, but they also have the task of converting 'Data' into 'Information' for their end users.
To me, the *prime* obligation of the database designer is to design databases that: a) accurately model the end users' data requirements; b) help our developers be effective in producing applications that their end users find user-friendly; and c) help ourselves and our successors with database maintenance, by making our designs sensible, reliable, simple but well normalized, as 'future-proof' as we can practically make them, and as 'self-documenting' as possible.
If we forget these, we are failing in the duty of care we have for our clients and our co-workers.
To this end, as someone who operates regularly at all 3 levels (user, developer and db designer), and usually on someone else's long-neglected application, my view is that it doesn't matter so much which methodology (surrogate vs natural) you choose for your database design, as long as you normalize your database structure properly and apply your preferred methodology consistently. Do that, and I can work with what you give me.
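To make that concrete, here's a minimal sketch (SQLite via Python; the table and column names are purely my own invention, not from any real system) of the same pair of tables keyed each way. Either version is workable, provided it's applied consistently across the whole schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

conn.executescript("""
-- Natural-key design: a real-world attribute is the primary key,
-- and child tables reference it directly.
CREATE TABLE customer_nat (
    tax_id    TEXT PRIMARY KEY,   -- natural key: a real attribute of the entity
    name      TEXT NOT NULL
);
CREATE TABLE order_nat (
    order_no  TEXT PRIMARY KEY,
    tax_id    TEXT NOT NULL REFERENCES customer_nat(tax_id)
);

-- Surrogate-key design: a meaningless generated id is the primary key,
-- but the natural identifier keeps a UNIQUE constraint, so the design
-- stays just as well normalized.
CREATE TABLE customer_sur (
    customer_id INTEGER PRIMARY KEY,   -- surrogate key: generated, meaningless
    tax_id      TEXT NOT NULL UNIQUE,  -- the natural key is still enforced
    name        TEXT NOT NULL
);
CREATE TABLE order_sur (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer_sur(customer_id)
);
""")
```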
If your choice of key (be it natural or surrogate) for any given table makes it difficult for your software development team to deliver good information to their end users, then you have failed.
For example, if your product development team needs to use an application-generation tool that works better with surrogate keys than with natural keys, then to me it would make sense to implement surrogate keys. Not doing so just because it would 'corrupt' your database structure would seem counter-productive.
I've seen databases that work using both techniques, and I can't say that I have a really good argument that one is better than the other.
That said, I *do* have a previously stated preference for surrogates, and a recent experience of migrating a client's data from an old accounting package to a new ERP package highlighted why: I would have saved an awful lot of time if both systems had used surrogate keys, rather than the slightly different sizes of 'natural' key that my client decided they wanted to keep. But that's a different story.
[EDIT] Just in case someone misses this in my points: my main concern here is with 'practical implementation'; a theoretical data model should always base its relationships on 'natural' attributes of the relevant entities.
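Continuing the hypothetical sketch above, this is how the two views coexist in practice: the implementation keys on the surrogate, while the UNIQUE constraint keeps enforcing the 'natural' attribute the theoretical model relates on:

```python
# Even in the surrogate-key implementation, the natural attribute
# that the theoretical model relates on is still enforced.
conn.execute("INSERT INTO customer_sur (tax_id, name) VALUES ('TAX-001', 'Acme Ltd')")
try:
    # A second row with the same natural identifier is rejected outright:
    conn.execute("INSERT INTO customer_sur (tax_id, name) VALUES ('TAX-001', 'Imposter')")
except sqlite3.IntegrityError as err:
    print("Natural key still enforced:", err)  # UNIQUE constraint failed: customer_sur.tax_id
```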
[EDIT] Took out some unneeded, distracting 'rhetoric' (Thanks David)
P.S. My 'rhetoric' was trying (badly) to point out that many people fall into the trap of equating 'surrogate vs natural keys' with 'normalization', as if they were the same discussion. IMHO they are not; the quality of normalization has little to do with whether surrogate or natural keys are used. I've seen both good and bad designs from both 'camps'. Since I can't figure out where to put this point within my original text, I've added it as a postscript.