Statistics Performance differences

  • Hi All,

SQL Server version: 10.50.6000 (2008 R2 SP3)

We have a table with 250 million rows in Production. It takes 18 minutes to run: UPDATE STATISTICS schema.tablename

Checking DBCC SHOW_STATISTICS afterwards, it had sampled about 30% of the rows, with 183 histogram steps.

Looking at the number of statistics saved on the table, Prod has a good number more auto-created column statistics (_WA_Sys...) than Test.
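To compare the two servers side by side, something like this should work (a sketch using sys.stats and STATS_DATE, both available on 2008 R2; schema.tablename is a placeholder):

```sql
-- List every statistic on the table, flag auto-created (_WA_Sys...) ones,
-- and show when each was last updated. Run on both Prod and Test.
SELECT s.name,
       s.auto_created,                              -- 1 = auto-created column stat
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'schema.tablename')
ORDER BY s.auto_created, s.name;
```

A plain UPDATE STATISTICS with no stat name rebuilds every statistic in that list, so a longer list means proportionally more work.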

The same table exists on TEST with only a very small difference in row count and the same number of indexes, yet updating stats there takes 6 minutes.

DBCC SHOW_STATISTICS shows the same sample size, but 176 histogram steps.

Prod storage is much faster based on benchmarks run recently; that server has 12 cores and 64 GB of RAM.

Test storage is slower based on the same benchmarks; that server has 4 cores and 32 GB of RAM.

Both instances have the same settings for MAXDOP and cost threshold for parallelism (CTFP).

What are the likely causes of this drastic performance/time difference? Would a few (around 10) more column stats produce this much extra overhead?

I've been suggesting the developer run a smarter stats update, choosing the sample size based on row count and only touching statistics where rowmodctr > 0, but we still need this mystery solved and I'm unsure where to look further. Any help is appreciated!
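For what it's worth, the "only touch modified stats" idea can be sketched roughly like this (names and the sample percentage are placeholders; it leans on rowmodctr from the sys.sysindexes compatibility view, which is still populated on 2008 R2 and where statistics rows carry indid = stats_id):

```sql
-- Sketch: update only the statistics on schema.tablename that have seen
-- modifications since their last update, one at a time with an explicit sample.
DECLARE @stat sysname, @sql nvarchar(400);

DECLARE stat_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name
    FROM sys.stats AS s
    JOIN sys.sysindexes AS si
      ON si.id = s.object_id
     AND si.indid = s.stats_id
    WHERE s.object_id = OBJECT_ID(N'schema.tablename')
      AND si.rowmodctr > 0;            -- skip stats with no changes

OPEN stat_cur;
FETCH NEXT FROM stat_cur INTO @stat;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'UPDATE STATISTICS schema.tablename '
             + QUOTENAME(@stat)
             + N' WITH SAMPLE 30 PERCENT;';  -- pick sample size by row count
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM stat_cur INTO @stat;
END

CLOSE stat_cur;
DEALLOCATE stat_cur;
```

That still wouldn't explain why the same full-table command is 3x slower on the beefier box, of course.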

    -RR
