I don't agree with the content of this article at all. It's a cute idea, and it will probably reduce the size of your database schema -- at the expense of performance and data maintainability. The database will be forced to perform needless scans of the "store key" table every time any type of operation is done.

The author hasn't discussed what happens when the table grows to millions or billions of rows. How will it be maintained? Will you delete items from it as they age? How will items in the global store key table be searched?

What happens when you're storing redundant names, redundant addresses, redundant department names, redundant product codes, and twenty other redundant types in the table, and suddenly your new business requirement is to write a stored procedure to search for customers by address? Do you want your database scanning all of those extra rows every time? Or needlessly joining the entire customers table to the "store key" table first, just to reduce the text-scanning workload a bit?

And what happens when you have two very large entity types stored in the table and you need to boost search performance for both at the same time? Good luck partitioning the data onto different physical disks when the table has no identifier of entity type.
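To make the shape of the problem concrete, here is a minimal sketch of the design being criticized, using SQLite via Python. The table and column names here are hypothetical (the article's actual schema may differ); the point is that searching one entity type forces the engine to scan, or join against, value text belonging to every entity type:

```python
import sqlite3

# Hypothetical "global store key table" design: one table holds every
# string value, regardless of entity type, and entity tables hold only
# keys into it.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE store_key (key_id INTEGER PRIMARY KEY, value TEXT);
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY,
                        name_key INTEGER, address_key INTEGER);
""")

# Values of every entity type land in the same table...
cur.executemany("INSERT INTO store_key VALUES (?, ?)",
                [(1, 'Jane Smith'),     # a customer name
                 (2, '12 Oak Street'),  # a customer address
                 (3, 'Accounting'),     # a department name
                 (4, 'SKU-4471')])      # a product code
cur.execute("INSERT INTO customers VALUES (1, 1, 2)")

# ...so searching customers by address means either scanning text that
# belongs to unrelated entity types, or joining the customers table in
# first just to narrow the scan.
cur.execute("""
SELECT c.customer_id
FROM customers AS c
JOIN store_key AS sk ON sk.key_id = c.address_key
WHERE sk.value LIKE '%Oak Street%'
""")
rows = cur.fetchall()
print(rows)  # [(1,)]
```

With a conventional schema, the address column would live on (or be keyed to) the customers table alone, and the text scan would touch only customer addresses.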
The author discusses "normalization" but fails to realize that the idea of a global "store key" table violates the very important concept of separation of entity types within the database.
Edited by - amachanic on 12/04/2003 10:36:48 AM
--
Adam Machanic
whoisactive