• wbrianwhite (7/31/2010)


    http://blogs.msdn.com/b/sqlazure/archive/2010/07/27/10043069.aspx

    Just a note: non-clustered indexes (which is what a covering index is) reduce the performance of your writes, because on inserts and updates the indexes also need to be updated. So you need to balance your covering-index creation against the ratio of reads to writes in your database. Databases with a disproportionate amount of reads relative to writes gain more from covering indexes.

    Indexes DO affect your write performance; it is not theoretical. You have to strike a balance between reducing I/O by adding indexes and reducing insert efficiency by adding them. If you add a new covering index for every bad query, the cumulative effect is significant. If you can improve the query without changing the indexes, it is a win/win. In my shop the DBAs monitor poorly performing queries and recommend re-writes and/or indexes to the devs. Some common problems in queries that can be fixed in the code: non-SARGable clauses; a WHERE clause that tests "a OR b" instead of two selects with simple WHERE clauses unioned together; cursor usage; and not specifying a known column in the WHERE clause when doing so would let you take advantage of a clustered index.
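    To illustrate a couple of those fixes, here is a rough sketch using a made-up Orders table (the table, column, and literal values are hypothetical, not from the post):

```sql
-- Hypothetical table: Orders(OrderID, CustomerID, OrderDate, Status)

-- Non-SARGable: wrapping the column in a function blocks an index seek
SELECT OrderID FROM Orders WHERE YEAR(OrderDate) = 2010;

-- SARGable rewrite: compare the bare column against a date range
SELECT OrderID FROM Orders
WHERE OrderDate >= '20100101' AND OrderDate < '20110101';

-- "WHERE a OR b" can force a scan; two simple seeks unioned together
-- may each use their own index instead
SELECT OrderID FROM Orders WHERE CustomerID = 42
UNION
SELECT OrderID FROM Orders WHERE Status = 'OPEN';
```

    Whether the UNION rewrite actually wins depends on the indexes available and the selectivity of each predicate, so check the plans for both versions.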

    As previously stated, the cost of adding and updating indexes is actually much smaller than it's made out to be. This is the same sort of thing spouted by people who say you should always rebuild your indexes when, generally, updating statistics is all that's required.
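    For comparison, here is the lightweight option next to the heavyweight one (object and index names are hypothetical):

```sql
-- Often all that's needed to get the optimizer a good plan again:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- The much more expensive operation people reach for by default:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;
```

    A rebuild rewrites the whole index (and refreshes its statistics as a side effect), while UPDATE STATISTICS just resamples the data, which is usually far cheaper.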

    More often than not, the problem with a poorly performing query is one of the cases you've listed, but what if you're dealing with a vendor application where code changes could take months, if they ever happen at all?

    Better to wear the impact of applying an index than to leave the query performing poorly.
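    In the vendor-app case, a covering index like the following is the kind of change you can make without touching their code (again, the names and columns are illustrative only):

```sql
-- Covers a query such as:
--   SELECT OrderDate, Status FROM Orders WHERE CustomerID = @id
-- The key column supports the seek; INCLUDE adds the selected
-- columns at the leaf level so no lookup back to the table is needed.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, Status);
```

    The trade-off discussed above still applies: every insert and update to the covered columns now has to maintain this index as well.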

    --------------------
    Colt 45 - the original point and click interface