• I think the sum(*) in the SQL that creates IndexCaptureSummary should probably be count(*). Also, in the report SQL statement, the table name should be CompileIndexesShowContig, to match the SQL used to create the table earlier on...
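
    For what it's worth, here is roughly how I read the corrected statements. I'm guessing at the capture table and column names (IndexCapture, ObjectName, IndexName) since I don't have the article's exact schema in front of me:

        -- Summary creation: count(*) rather than sum(*)
        SELECT ObjectName, IndexName, COUNT(*) AS ScanCount
        INTO IndexCaptureSummary
        FROM IndexCapture
        GROUP BY ObjectName, IndexName;

        -- Report: join against CompileIndexesShowContig, the table the
        -- article actually creates earlier
        SELECT s.ObjectName, s.IndexName, s.ScanCount,
               s.ScanCount * c.AverageRecordSize AS Flow
        FROM IndexCaptureSummary AS s
        JOIN CompileIndexesShowContig AS c
          ON c.IndexName = s.IndexName;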

    I thought this was an interesting concept, so I ran the study on my own database, and I have to say it does appear that the results as stated in the article can be misleading. One of our most heavily used indexes is on our products table, which has large records (varchar() text descriptions, etc.) but only 250 rows, and the index used is clustered.

    I'm not sure that the products table is really our IO hog, but with this technique it showed up as 57% of flow on 17% of scans. Most of our queries don't select * from products but only a few columns, so (scancount * AverageRecordSize) may not be a good indication of the IO demand those scans create. The table is also used frequently and is small enough overall that I'm 99% sure it's held in memory and isn't hurting disk IO at all.
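
    If you're on SQL Server 2005 or later, one way to sanity-check that in-memory hunch is to count the table's pages sitting in the buffer pool. A rough sketch ('products' is our table name; this ignores LOB allocation units):

        -- 8 KB pages from the products table currently cached in memory
        SELECT COUNT(*) AS cached_pages,
               COUNT(*) * 8 / 1024.0 AS cached_mb
        FROM sys.dm_os_buffer_descriptors AS bd
        JOIN sys.allocation_units AS au
          ON au.allocation_unit_id = bd.allocation_unit_id
        JOIN sys.partitions AS p
          ON p.hobt_id = au.container_id   -- in-row / row-overflow data only
        WHERE bd.database_id = DB_ID()
          AND p.object_id = OBJECT_ID('products');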

    Since the Profiler event Scan:Started just fires when an index is used (correct?), our products primary key is probably also scanned when the products table appears in joins where product records aren't (necessarily) even being retrieved. Not too sure on this point...
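
    For example (Orders is a hypothetical table here), I'd expect something like this to touch the products primary key, and presumably fire Scan:Started, without returning a single products column:

        -- The join probes/scans the products PK to satisfy the ON clause,
        -- but the select list pulls nothing from products itself
        SELECT o.OrderID, o.Quantity
        FROM Orders AS o
        JOIN products AS p
          ON p.ProductID = o.ProductID;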

    Thus, I think the data collection technique is good, but I wonder whether there is a better formulation of the summary report, factoring in the number of rows in the table and/or other data from Profiler, that would give a more accurate picture of IO demands (one straw-man attempt below). Otherwise, great work!
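
    As a straw man, something along these lines might be a start: it weights each scan by the whole table's size instead of one average record, and flags tables small enough to probably stay cached. The 10 MB cutoff is arbitrary, and the table/column names follow my guesses above:

        SELECT s.ObjectName,
               s.ScanCount,
               i.rowcnt,
               -- bytes a full scan would actually read
               s.ScanCount * i.rowcnt * c.AverageRecordSize AS EstBytesScanned,
               CASE WHEN i.rowcnt * c.AverageRecordSize < 10 * 1024 * 1024
                    THEN 'probably cached' ELSE 'probably disk IO' END AS CacheGuess
        FROM IndexCaptureSummary AS s
        JOIN CompileIndexesShowContig AS c ON c.IndexName = s.IndexName
        -- NB: index names aren't unique across tables, so a real version
        -- would match on the owning object as well
        JOIN sysindexes AS i ON i.name = s.IndexName
        ORDER BY EstBytesScanned DESC;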