October 7, 2013 at 6:10 am
I want to focus on tracing queries that are most dependent on really good statistics.
I've learned how to do server-side traces, which seem to have a light impact on our production server. Now I've been asked to capture traces that can be replayed in our disaster-recovery environment in an attempt to find the "sweet spot" for the update statistics sample rate. For years we updated statistics with a full scan (100% sample); then, as the database grew, we backed off to 80%. Now we want to see whether we should go back to 100%, which can run multi-threaded (parallel), or lower the rate further.
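For what it's worth, the comparison I have in mind boils down to something like this (dbo.Orders and the statistic name are just placeholders, not our real objects):

-- Full scan: reads every row, but on Enterprise it can run parallel
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;

-- Sampled: reads a subset of pages; as I understand it the sampled update runs
-- single-threaded and has to sort the sample (possibly in tempdb), which is why
-- a high sample rate isn't automatically cheaper than 100%
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH SAMPLE 80 PERCENT;

The idea is to replay the same captured workload after each variant and compare durations and plan quality.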
This database has clients of many different sizes and the application uses an ORM. It's quite relational, so queries often involve numerous joins. The business allows end users to run LIKE queries from the application, sometimes double-wildcard LIKE queries, so I was leaning toward capturing those. After numerous Monday-morning performance crises over the years, everyone is afraid to lower the sample rate, but I found several articles indicating that until you get below about a 50% sample, a full scan may actually run faster, since it doesn't have to sort in tempdb (and it runs parallel threads).
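The pattern I mean is along these lines (the table and column are made up for illustration):

-- Leading-wildcard search generated by the application
SELECT CustomerID, CustomerName
FROM dbo.Customers
WHERE CustomerName LIKE '%smith%';

As I understand it, estimates for these lean on the string summary built as part of the column statistics, which is part of why I suspect sample rate matters so much for this workload and why I want these batches in the capture.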
Auto create/update statistics are on, as is asynchronous statistics update. We update stats using an algorithm that finds the ones needing it most, based on rows modified and table size -- 3 hours every weeknight and several hours on Saturday -- all in addition to regular index rebuilds/reorgs. The server is SQL Server 2012 Enterprise, 64 cores (hyperthreaded), with max server memory set to 368 GB.
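The selection logic is roughly along these lines (a simplified sketch, not our actual procedure; assumes SQL 2012 SP1 or later for sys.dm_db_stats_properties):

-- Rank statistics by modifications relative to table size
SELECT OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
       OBJECT_NAME(s.object_id)        AS table_name,
       s.name                          AS stats_name,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter,
       sp.last_updated
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.modification_counter > 0
ORDER BY sp.modification_counter * 1.0 / NULLIF(sp.rows, 0) DESC; -- most-churned first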
October 7, 2013 at 11:28 am
Thanks, I knew that full index rebuilds update stats with a full scan. What I'm trying to figure out is how to target Profiler traces that, when replayed, will be most useful in deciding what our optimum statistics sample rate should be.
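What I have so far is a stripped-down server-side trace along these lines (the file path is a placeholder, and I know a trace that can actually be replayed needs the full TSQL_Replay event and column set; this just shows how I'm targeting the LIKE batches):

DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 512;
SET @on = 1;

-- Create the trace with file rollover (option 2); sp_trace_create appends .trc
EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\Traces\like_workload', @maxfilesize;

-- SQL:BatchCompleted (12) and RPC:Completed (10) with TextData/SPID/Duration/StartTime
EXEC sp_trace_setevent @TraceID, 12, 1, @on;   -- TextData
EXEC sp_trace_setevent @TraceID, 12, 12, @on;  -- SPID
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- Duration
EXEC sp_trace_setevent @TraceID, 12, 14, @on;  -- StartTime
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 12, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 14, @on;

-- Keep only batches whose text contains LIKE
-- (column 1 = TextData, logical operator 0 = AND, comparison operator 6 = LIKE)
EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'%LIKE%';

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace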