• MJ-1115929 (8/20/2014)


    Dear Experts,

    We have a new project that requires some modeling, design changes, and performance tuning. As of now I am on the administration side and have only a basic idea of the topics above, so I need some help, advice, and suggestions.

    1) We have a huge database where the data is historic, and in addition we run reports on old records that are no longer actively being updated. How can we improve the performance of both reads and writes in this situation (about 100 GB of data)? As of now the tables are not partitioned and everything sits in a single filegroup. Please provide some best practices and ideas on this.

    In general, the best thing to do is break the OLTP aspects of the system apart from the reporting aspects. That way you can optimize one design for OLTP and the other for reporting. Generally this is done with two different databases, not the same database with different partitions. Beyond that, all you can really do is design your indexes appropriately, being especially careful about the choice of your clustered index.
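    As a rough sketch only (the table, column, and index names here are hypothetical, not taken from your system), the OLTP side would typically get a narrow, ever-increasing clustered key, while the copy in the reporting database gets indexes shaped around how the reports actually read the data:

    -- Hypothetical OLTP table: a narrow, ever-increasing clustered key keeps
    -- new rows appending at the logical end of the index and keeps the row
    -- locator small for every nonclustered index.
    CREATE TABLE dbo.OrderHeader
    (
        OrderID    INT IDENTITY(1,1) NOT NULL,
        CustomerID INT               NOT NULL,
        OrderDate  DATETIME2(0)      NOT NULL,
        TotalDue   DECIMAL(19,4)     NOT NULL,
        CONSTRAINT PK_OrderHeader PRIMARY KEY CLUSTERED (OrderID)
    );

    -- In the separate reporting database, index for the read patterns instead,
    -- for example range scans by date covering the columns the reports need.
    CREATE NONCLUSTERED INDEX IX_OrderHeader_OrderDate
        ON dbo.OrderHeader (OrderDate)
        INCLUDE (CustomerID, TotalDue);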

    2) In which situation we De-normalize a database?

    I am very averse to denormalizing databases for performance. Most of the time this backfires. If a single giant table were better for performance, why would we normalize at all (other than some data integrity issues that we could resolve through coding)? Normalization actually helps performance. Here's just one example.
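    Purely as an illustration (these table names are made up, not from the question), a normalized design keeps the frequently scanned rows narrow, so more rows fit on a page and the same query does less I/O, and the declarative constraints come along for free:

    -- Illustrative normalized design: customer attributes are stored once in
    -- dbo.Customer rather than being repeated on every order row.
    CREATE TABLE dbo.Customer
    (
        CustomerID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CustomerName NVARCHAR(100)     NOT NULL,
        Email        NVARCHAR(256)     NOT NULL
    );

    CREATE TABLE dbo.SalesOrder
    (
        OrderID    INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CustomerID INT               NOT NULL
            CONSTRAINT FK_SalesOrder_Customer
            REFERENCES dbo.Customer (CustomerID),
        OrderDate  DATETIME2(0)      NOT NULL,
        TotalDue   DECIMAL(19,4)     NOT NULL
    );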

    3) When De-normalizing, how can we ensure data consistency?

    You can't, and there's the problem. You're completely dependent on the application getting it right.
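    To make that concrete with a hypothetical denormalized version of the tables sketched above: once CustomerName is copied onto every order row, no constraint in the engine stops two rows for the same customer from disagreeing, and about all you can do is audit for drift after the fact:

    -- Hypothetical denormalized table: CustomerName is duplicated on every row,
    -- and nothing declarative keeps the copies in sync.
    CREATE TABLE dbo.SalesOrderDenorm
    (
        OrderID      INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CustomerID   INT               NOT NULL,
        CustomerName NVARCHAR(100)     NOT NULL,
        OrderDate    DATETIME2(0)      NOT NULL,
        TotalDue     DECIMAL(19,4)     NOT NULL
    );

    -- After the fact, the best you can do is hunt for rows that have drifted apart.
    SELECT CustomerID
    FROM dbo.SalesOrderDenorm
    GROUP BY CustomerID
    HAVING COUNT(DISTINCT CustomerName) > 1;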

    4) Can we de-normalize the above tables for better read/write performance?

    Without seeing the tables, who knows? Possibly, but possibly not. Again, denormalization is frequently a crutch for people who have either made really poor design choices, especially on the clustered index, or are writing bad T-SQL, especially in their JOIN criteria. So they throw everything into one giant table and suddenly it's "faster," but not as fast as it really could be.
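    For example (again with purely hypothetical tables, this time keyed on a character-typed AccountNumber), wrapping the join column in a function hides the key from the optimizer, so an index on that column can no longer be used for a seek on the join:

    -- Problematic JOIN criteria: the functions around a.AccountNumber make the
    -- predicate non-SARGable, so an index on Account.AccountNumber cannot be
    -- used for a seek on this join.
    SELECT i.InvoiceID, a.AccountName
    FROM dbo.Invoice AS i
    JOIN dbo.Account AS a
        ON i.AccountNumber = LTRIM(RTRIM(a.AccountNumber));

    -- Clean join directly on the key columns; scrub the stored data once
    -- instead of patching it up in every query.
    SELECT i.InvoiceID, a.AccountName
    FROM dbo.Invoice AS i
    JOIN dbo.Account AS a
        ON i.AccountNumber = a.AccountNumber;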

    5) When a new release introduces changes to the production database, how do you ensure data integrity and minimize downtime when deploying? Also, what are the best practices for this?

    Thanks

    MJ

    Deployments are a giant process. The best single resource discussing this topic that I know of currently can be found here.

    Since you've never done this type of work before, I strongly suggest you hire a consultant with some experience to walk you through it. There are so many possible mistakes to be made. You want to minimize those as much as possible.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning