• T.Ashish - Monday, January 15, 2018 11:31 PM

A 900-column table is likely to have a lot of indexes and a complex primary key. What will happen to UPDATE and DELETE performance on such tables?

    Lots of activity.

Seriously though, pretty much what you would expect. You have to write the data once to the storage structure (heap or clustered index) and then perform the appropriate writes to each applicable nonclustered index. Any of those writes can lead to the allocation of new pages or to page splits, which mean a whole bunch more writes. And you do that every single time.
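A small sketch of the mechanics (all table, column, and index names here are invented for illustration): every nonclustered index whose key or included columns are touched by an UPDATE must be maintained in addition to the base structure.

```sql
-- Hypothetical wide table; imagine hundreds more columns.
CREATE TABLE dbo.WideTable
(
    Id   INT IDENTITY PRIMARY KEY CLUSTERED,  -- the clustered storage structure
    ColA VARCHAR(50),
    ColB VARCHAR(50),
    ColC DATETIME2
);

CREATE NONCLUSTERED INDEX IX_WideTable_ColA
    ON dbo.WideTable (ColA);
CREATE NONCLUSTERED INDEX IX_WideTable_ColB
    ON dbo.WideTable (ColB) INCLUDE (ColC);

-- This single-row UPDATE has to write the clustered index AND both
-- nonclustered indexes (ColA keys one, ColB keys the other), and any
-- of those writes can trigger a page split.
UPDATE dbo.WideTable
SET ColA = 'new value',
    ColB = 'another value'
WHERE Id = 42;
```

Running SET STATISTICS IO ON before the UPDATE, or looking at the actual execution plan, will show the extra index maintenance. Scale those two indexes up to however many a 900-column table accumulates and the write amplification is obvious.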

I've said it before, I'll say it again: a properly normalized database enhances performance within an RDBMS. If you really only need a flat file, use a flat file storage system or a document database, not an RDBMS. An RDBMS will work better with relational data than with non-relational data. It will work better with a star schema than with a flat file design. The whole "flat tables are faster" myth makes me insane. It generally comes from "flat tables are faster than my horrifically bad normalization/poor choice of fact table definition." Yes, mediocre code runs faster than bad code. Well done. What if we actually wrote good code?

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning