sys.dm_db_index_operational_stats usage question

  • Hi All, this is based on the BOL entry for SQL 2008 R2. (BTW, I checked MSDN online, and the SQL 2014 entry reads the same.)

    The data returned by sys.dm_db_index_operational_stats exists only as long as the metadata cache object that represents the heap or index is available. This data is neither persistent nor transactionally consistent. This means you cannot use these counters to determine whether an index has been used or not, or when the index was last used. For information about this, see sys.dm_db_index_usage_stats (Transact-SQL).

    From this it seems obvious that to properly interpret the data we need to take deltas of two readings at least a few minutes apart (perhaps once an hour for a day or so depending on activity).
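    Something like this is the kind of reading I have in mind (just a rough sketch; dbo.IndexOpStats_Snapshot is a holding table name I made up for this post):

        -- Rough sketch of a single reading: dump the counters I care about into a
        -- holding table (dbo.IndexOpStats_Snapshot is hypothetical)
        SELECT  GETDATE() AS capture_time,
                database_id, object_id, index_id, partition_number,
                leaf_insert_count, leaf_update_count, leaf_delete_count,
                range_scan_count, singleton_lookup_count
        INTO    dbo.IndexOpStats_Snapshot   -- first run only; later runs would INSERT
        FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL);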

    However, the phrase "exists only as long as the metadata cache object that represents the heap or index is available" confuses me. It is worded very differently from other accumulating DMVs such as Virtual IO Stats (sys.dm_io_virtual_file_stats) or Index Usage (sys.dm_db_index_usage_stats), which are pretty clear about accumulating since the last restart (with some exceptions).
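    For contrast, this is what I mean — a single reading of the usage-stats DMV is interpretable on its own because it carries last-used timestamps (again, just a sketch):

        -- sys.dm_db_index_usage_stats at least tells me when an index was last touched
        SELECT  database_id, object_id, index_id,
                user_seeks, user_scans, user_lookups, user_updates,
                last_user_seek, last_user_scan, last_user_lookup, last_user_update
        FROM    sys.dm_db_index_usage_stats
        WHERE   database_id = DB_ID();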

    In the next paragraph in BOL --

    The values for each column are set to zero whenever the metadata for the heap or index is brought into the metadata cache and statistics are accumulated until the cache object is removed from the metadata cache. Therefore, an active heap or index will likely always have its metadata in the cache, and the cumulative counts may reflect activity since the instance of SQL Server was last started. The metadata for a less active heap or index will move in and out of the cache as it is used.

    It seems to say that the data in this DMV may be purged at any time, just like the Data Page Cache or the Plan Cache.

    If that is true, how can I be sure that a second reading from the DMV is valid and directly related to a prior reading? I don't see any column that indicates when the row was created, how long it has been in use, or when it was last 'reloaded'.

    So, since my server has been online for more than 30 days, when I read the data for a table/index the first time I have no real basis for evaluating the numbers (other than zero).

    When I read the data an hour later, I still have no guaranteed way to know that the two sets of data are directly related. Sure, I can look at the data and apply a "sanity test", but I would like to set up an automated job to collect some baseline data, and I don't think I can write that sanity test in SQL 🙂
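    For what it's worth, the delta I would want the job to compute looks something like this (hypothetical table and variable names carried over from the sketch above; it is only meaningful if the counters were not reset between the two captures, which is exactly what I cannot verify):

        -- Hypothetical delta between two captures taken an hour apart.
        -- Only meaningful if the metadata cache object survived the whole hour.
        DECLARE @earlier datetime, @later datetime;   -- set these to the two capture times

        SELECT  s2.object_id, s2.index_id, s2.partition_number,
                s2.range_scan_count       - s1.range_scan_count       AS range_scans_delta,
                s2.singleton_lookup_count - s1.singleton_lookup_count AS singleton_lookups_delta,
                s2.leaf_insert_count      - s1.leaf_insert_count      AS leaf_inserts_delta
        FROM    dbo.IndexOpStats_Snapshot AS s1
        JOIN    dbo.IndexOpStats_Snapshot AS s2
                ON  s2.database_id      = s1.database_id
                AND s2.object_id        = s1.object_id
                AND s2.index_id         = s1.index_id
                AND s2.partition_number = s1.partition_number
        WHERE   s1.capture_time = @earlier
          AND   s2.capture_time = @later;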

    So what's the deal? Do I take readings an hour apart and just assume the data has not been purged and reloaded in the meantime?

    Thanks for any suggestions or guidance.
