
TechEd - Query Performance


A program manager asked a question at the start: Who's had a query performance issue where a query "went bad" on the server?

Lots of hands up, and that was one of the things I wanted from this session.

What's reliability? MS sees this as two things:

 - Quality: Given the criteria, you get satisfactory performance from your code at a point in time.

 - Consistency: The query performance is satisfactory over time. No surprises.

Is there a tradeoff? The presenter said there is: better performance might come, but it may cost consistency. I'm not sure I agree with this. There was an admission that they tend to err on the side of better performance rather than consistency.

Query performance is diverse. It really depends on what you want: throughput, wall clock time, CPU time, disk I/O, concurrency, etc.

The Query Processor (QP) needs to deal with this. It must make tradeoffs, and the big one is compilation time vs. execution time. The first job is to pick an execution plan. They could spend more time picking plans, but it doesn't pay off unless the query will be executed many times. Should they worry about concurrency, or assume all queries are executed singly? They don't do a lot of work to consider locking.

The result is that the "optimal" query plan isn't chosen. A "good enough" plan is picked. However, MS realizes that reliability (consistency) is important.

What does the QP do (features)?

  • Column Statistics - created automatically.
  • Multi-Column Statistics - not created automatically, except for indexes that cover those columns. If you have multi-column queries, build an index or create the statistics yourself to help the QP (see the sketch after this list).
  • Sample Rate - defaults to a few percent (depends on table size), but there is an option to do a full scan. Sampling works well, but there are a few places where FULLSCAN helps. You want to allow these to be created (keep auto statistics on) and don't delete them.
  • You do need up-to-date statistics, so if the data changes significantly (size or distribution), you want to be sure these are updated. This should be automatic, but be aware of it if you have performance problems.
  • Database Constraints - How do these help? If you have implemented DRI, the QP can understand some properties of the design. With PK/FK constraints it knows matching rows exist in the referenced table, so it can reason better about how to join those tables, possibly removing joins entirely.
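
To make this concrete, here's a minimal sketch of creating multi-column statistics by hand and updating them with a full scan. The Sales.Orders table and column names are my own invention, not from the session:

    -- Hypothetical table: the QP only auto-creates single-column statistics,
    -- so build the multi-column ones yourself
    CREATE STATISTICS st_Orders_Region_Status
        ON Sales.Orders (Region, OrderStatus);

    -- Force a full scan where the default sample rate isn't good enough
    UPDATE STATISTICS Sales.Orders st_Orders_Region_Status WITH FULLSCAN;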

Query Hints

There are a number of hints. The OPTION clause allows hints that influence a particular query. There are other places in the SELECT statement you can use to influence joins, grouping, degree of parallelism, join order, parameterization, parameter sniffing, etc. There is also a way to force a particular plan.
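
As a quick illustration (the query and table are made up; HASH GROUP and MAXDOP are real query hints), an OPTION clause might look like:

    -- Hypothetical query: ask for hash aggregation and no parallelism
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM dbo.Orders
    WHERE OrderDate >= '20080101'
    GROUP BY CustomerID
    OPTION (HASH GROUP, MAXDOP 1);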

The best practice is to use hints only if you cannot solve an issue any other way.

Why doesn't SQL Server pick a good plan?

- Estimation issues, even if statistics are up to date. They may under- or over-estimate the number of rows.

- Compile time might be too long, so the optimizer may stop once it thinks it has a "good enough" plan.

Some Best Practices

  • Parallelism hints work well with highly concurrent, low complexity workloads.
  • Parameterize more aggressively in workloads with uniform and stable data distributions.
  • Recompile more often in higher complexity, skewed, and changing data distribution cases.
  • Most hints and plan forcing make sense in workloads with few ad hoc queries and mostly modules (procedures) and pre-compiled batches.

Plan Forcing

Plan forcing kicks in when you supply a USE PLAN hint when submitting the query. This provides an XML SHOWPLAN document that encodes the plan, and the QP uses it. It's part of the query text and needs to ship with the application, so changing the plan means changing the app.
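
A sketch of the shape of it, with a hypothetical query; the XML is elided here because a real hint needs the entire SHOWPLAN XML document (you can capture one with SET SHOWPLAN_XML ON):

    SELECT OrderID, Total
    FROM dbo.Orders
    WHERE CustomerID = 42
    OPTION (USE PLAN N'<ShowPlanXML xmlns="...">...</ShowPlanXML>');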

Plan Guides
These are not part of the application; they apply query hints and are added by the DBA. They are like indexes in terms of guiding the QP in how to choose a plan. The idea here is to fix performance problems with a particular application, usually a third party or custom application. If you understand why the query performs poorly but you cannot change the application (change the SQL), you can use plan guides to influence the QP to perform better.

This was introduced in SS2K5. Useful, but hard to understand, and DBAs have had a hard time with it. I haven't seen a lot written about it, so I'd probably concur.

This works by the application submitting a query. The query is compared to plan guides and matched up. If a plan guide matches the query, the QP tries to apply its hints to the query. The hints are honored if there is a successful match.
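
For example, a guide that pins a hint onto one ad hoc statement might look like this; the names and statement are illustrative, and @stmt has to match the application's query text exactly:

    EXEC sp_create_plan_guide
        @name = N'Guide_Orders_ByCustomer',
        @stmt = N'SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42;',
        @type = N'SQL',
        @module_or_batch = NULL,  -- NULL means the batch is just @stmt
        @params = NULL,
        @hints = N'OPTION (MAXDOP 1)';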

Plan guides are exposed through SMO.

USE PLAN

USE PLAN is a big part of SQL Server 2008.

DML is supported in 2008, and guides can be created from history after the application has been tuned.

Profiler trace events are added to monitor plan guides.

There are also PerfMon additions, and plan guides appear in SSMS in a new folder as well. This includes the ability to script them from SSMS.

The engine has additions to allow you to freeze a plan with sp_create_plan_guide_from_handle.

The plan handle can be gotten from sys.dm_exec_query_stats.
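
Putting those two together, freezing a plan might look like this sketch (the LIKE filter is just one illustrative way to find the query):

    DECLARE @handle varbinary(64);

    -- Find the cached plan for the query in question
    SELECT TOP (1) @handle = qs.plan_handle
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) AS st
    WHERE st.text LIKE N'%FROM dbo.Orders%';

    -- Freeze that plan as a guide
    EXEC sp_create_plan_guide_from_handle
        @name = N'Guide_Frozen_Orders',
        @plan_handle = @handle;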

In 2005, if an index was removed, plans failed. This is fixed: if the plan guide refers to an object that doesn't exist, the query will build a new plan and execute.

Not sure if Extended Events (XEvents) are plugged in here.

Profiler has new events: plan guide successful (matched) and unsuccessful (not matched or not used).

Development support with validation functions (sys.fn_validate_plan_guide).

One interesting demo: if you want to remove an index, you can actually script a check to see whether the index is being used by any plan guides, and then commit or roll back the drop depending on the results.
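
A rough sketch of the idea, with a hypothetical index name; sys.fn_validate_plan_guide returns a row for each guide that no longer validates:

    BEGIN TRAN;
    DROP INDEX IX_Orders_CustomerID ON dbo.Orders;

    -- If any plan guide broke, keep the index
    IF EXISTS (SELECT *
               FROM sys.plan_guides AS pg
               CROSS APPLY sys.fn_validate_plan_guide(pg.plan_guide_id))
        ROLLBACK TRAN;
    ELSE
        COMMIT TRAN;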

The scenario here is that you would test and build the plans on a dev or test system, and then script and move them to the production environment. This does mean you need the same database names, same indexes, same data, etc. Even the hardware should be the same.

You can also keep a history of good plans in your system. Lock them down by disabling the plan guides. Then you can go back if something breaks in the future.
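
Disabling one is a one-liner with sp_control_plan_guide (the guide name here is illustrative):

    EXEC sp_control_plan_guide @operation = N'DISABLE',
                               @name = N'Guide_Frozen_Orders';
    -- N'ENABLE' brings it back; N'DISABLE ALL' locks everything down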

Upgrades are a good use for this: disable and lock those plans down before upgrading beyond SQL Server 2008. If the upgraded plans have issues (new plans for enabled guides), you have copies. There is no guarantee that upgraded servers will perform well or use those plans.

If you mark a procedure with a plan guide, the current plan is evicted from the cache (nice term from the presenter) and the stored procedure is recompiled. Some plan guides cause recompiles, but if you specifically use a plan that's in cache, there is no compilation.

Resource Governor

Helps manage resources

You can mark workload groups as low, medium, or high importance, so workloads get preference depending on how they compare to other workloads.

You can also set up pools, such as an admin pool that "gets" 10% of memory, limits CPU, etc.
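
Something along these lines, I'd guess; the pool name, percentages, and classifier logic are all made up, and the classifier function has to live in master with schemabinding:

    CREATE RESOURCE POOL AdminPool
        WITH (MAX_MEMORY_PERCENT = 10, MAX_CPU_PERCENT = 20);

    CREATE WORKLOAD GROUP AdminGroup
        WITH (IMPORTANCE = HIGH)
        USING AdminPool;
    GO

    -- Route the admin login into its group (created in master)
    CREATE FUNCTION dbo.fn_classifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        RETURN (CASE WHEN SUSER_SNAME() = N'admin' THEN N'AdminGroup'
                     ELSE N'default' END);
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;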

Question: If there is extra memory available, can a pool go above the limits?

Answer: No, because memory is sticky and can be slow to get moving. CPU can exceed if there are extra resources.

Allocations don't need to add up to 100%. If Pool 1 has 90% max CPU and Pool 2 has 20% max CPU, they compete for the extra 10% if both are busy. This is a limit, not a guarantee.

How does the DAC (dedicated admin connection) fit in? Not sure how it should be configured, and should it have extra resources guaranteed?
