• I'm not a big fan of de-normalization in OLTP databases. If I need to run queries, I'll run them against a structure that is optimized for queries, such as a dimensional model or a heavily indexed Operational Data Store. I think an OLTP database exists for one reason: to capture (and validate) transactional data as quickly as possible. The emphasis is as much on write performance as it is on read performance. Anything that gets in the way of that mission, such as the locking associated with read-only queries, should be consciously reduced if not completely eliminated.

    I've actually found that it's easier and faster to develop the OLTP model and the ODS in parallel. It also allows you to build a data source for your downstream data warehouse that can be accessed without impacting the OLTP database. This is, after all, why so many companies use transactional replication: to populate their ODS for querying/reporting purposes.
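    As a rough, hypothetical sketch of what that read-optimized copy might look like (SQLite is used only for brevity; the table, column, and index names are my own invention, not from the original), the ODS side can be a flattened, heavily indexed structure that reporting queries hit instead of the OLTP tables:

```python
import sqlite3

# Stand-in for the separate ODS/reporting database that replication feeds.
ods = sqlite3.connect("ods_reporting.db")

# A flattened, read-optimized order summary: one wide row per order,
# indexed on the columns reporting queries actually filter and group by.
ods.executescript("""
CREATE TABLE IF NOT EXISTS ods_order_summary (
    order_id      INTEGER PRIMARY KEY,
    order_date    TEXT    NOT NULL,
    customer_name TEXT    NOT NULL,
    region        TEXT    NOT NULL,
    product_line  TEXT    NOT NULL,
    amount        NUMERIC NOT NULL
);
CREATE INDEX IF NOT EXISTS ix_order_summary_date   ON ods_order_summary (order_date);
CREATE INDEX IF NOT EXISTS ix_order_summary_region ON ods_order_summary (region, product_line);
""")

# Reporting queries run here, never against the OLTP database.
for row in ods.execute(
    "SELECT region, SUM(amount) FROM ods_order_summary GROUP BY region"
):
    print(row)
```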

    One of the reasons I don't like de-normalization in the OLTP database is that the business intent of the logical data model can become obfuscated by the perceived need for shortcuts in the physical data model. There is a reason why each entity exists in the normalized data model, and referential integrity demonstrates how those entities are related to each other. When those relationships are eliminated and attributes are scattered redundantly around the model, the reasoning behind those relationships is no longer clear. A normalized data model is very easy to "read"; once de-normalization occurs, a level of unnecessary complexity is added.
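    A minimal sketch of that point, again with hypothetical names that are not from the original: in the normalized form the foreign key states the customer/order relationship outright, while the de-normalized form copies customer attributes onto every order row and leaves the relationship implicit.

```python
import sqlite3

oltp = sqlite3.connect(":memory:")
oltp.execute("PRAGMA foreign_keys = ON")

# Normalized: the FOREIGN KEY documents and enforces how the entities
# relate, and each customer attribute lives in exactly one place.
oltp.executescript("""
CREATE TABLE customer (
    customer_id   INTEGER PRIMARY KEY,
    customer_name TEXT NOT NULL,
    region        TEXT NOT NULL
);
CREATE TABLE sales_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer (customer_id),
    order_date  TEXT NOT NULL,
    amount      NUMERIC NOT NULL
);
""")

# De-normalized: customer attributes are repeated on every order row.
# Nothing in the schema says where customer_name really belongs or how
# the two concepts relate, and a customer change now touches many rows.
oltp.executescript("""
CREATE TABLE sales_order_denorm (
    order_id      INTEGER PRIMARY KEY,
    customer_id   INTEGER,
    customer_name TEXT,
    customer_region TEXT,
    order_date    TEXT,
    amount        NUMERIC
);
""")
```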

    There was a time when the cost of enforcing physical referential integrity was quite high and locking algorithms were relatively primitive, so de-normalization was a means of overcoming the shortcomings of the engine. That time is past.

    "Beliefs" get in the way of learning.