• I have one problem with the calculations.  Overall it's a good article, and more metrics are always useful, if they're accurate.

    He lists the calculation for data flow as: Data Flow = Size of Data Element * Number of times it is accessed  -- and uses AverageRecordSize to represent Size of Data Element

    But there's a big piece missing, namely the number of rows.  A table scan or index scan retrieves every row in the table or index, so the size term should be the total table size, i.e. AverageRecordSize * NumberOfRows, or better yet, NumberOfPages.

    So, under the per-record formula, a wide table with a few rows comes across very differently than a narrow table with millions of rows, even though the two occupy the same number of pages and move the same amount of data when scanned.  I think it's best to think of table or index data flow in terms of pages, since the page is the memory allocation chunk and the unit of disk retrieval.
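
    To make the page-based approach concrete, here's a rough sketch (mine, not from the article) of a per-table data flow estimate built on the standard DMVs.  It assumes a full scan reads every used page, and it uses user_scans from sys.dm_db_index_usage_stats as a stand-in for "number of times it is accessed":

        -- Rough per-table data flow estimate: pages * 8 KB * scan count.
        -- Assumes every scan reads all used pages; seeks/lookups are ignored.
        SELECT  o.name                          AS TableName,
                SUM(ps.used_page_count)         AS Pages,
                SUM(ps.used_page_count) * 8     AS SizeKB,  -- SQL Server pages are 8 KB
                ISNULL(us.user_scans, 0)        AS Scans,
                SUM(ps.used_page_count) * 8 * ISNULL(us.user_scans, 0) AS DataFlowKB
        FROM    sys.dm_db_partition_stats AS ps
        JOIN    sys.objects AS o
                ON  o.object_id = ps.object_id
        LEFT JOIN sys.dm_db_index_usage_stats AS us
                ON  us.object_id   = ps.object_id
                AND us.index_id    = ps.index_id
                AND us.database_id = DB_ID()
        WHERE   o.is_ms_shipped = 0
          AND   ps.index_id IN (0, 1)   -- heap or clustered index only
        GROUP BY o.name, us.user_scans;

    Seeks and key lookups touch far fewer pages than a scan, so a real metric would weight them separately, but for scan-heavy workloads pages * 8 KB * scan count is a fair first cut.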


    Dylan Peters
    SQL Server DBA