Bad Query Plan
Posted Thursday, August 7, 2014 6:12 PM


Robert klimes (8/7/2014)
Roger Sabin (8/7/2014)
I tried what you suggested, and it does force the "bad" plan when I use 40 and a "good" plan when I use 17. Thanks.


So now that you have verified what is happening, the "fix" is up to you.

As Kevin suggested, you can use OPTIMIZE FOR (which I have used on a few occasions), which always builds the plan for a specific parameter value; if you use OPTIMIZE FOR UNKNOWN instead, it builds the plan from the overall statistics for all values. In either case you will not always have an optimal plan (both hints are sketched just below).

Another option is to add WITH RECOMPILE to the proc, which will generate the best plan for each parameter value at the expense of recompiling every time it runs. Depending on your workload and resources, that may or may not be acceptable.

Yet another option would be to refactor the proc so it always generates the same plan.
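
For concreteness, a minimal T-SQL sketch of the first two options just described. The procedure, table, and column names (dbo.GetOrders_OptimizeFor, dbo.GetOrders_Recompile, dbo.Orders) are hypothetical stand-ins, not the OP's objects; 17 is the "good" parameter value reported earlier in the thread:

-- Option 1: OPTIMIZE FOR -- the plan is always compiled as if
-- @CustomerID = 17, regardless of the value actually passed in.
CREATE PROCEDURE dbo.GetOrders_OptimizeFor
    @CustomerID int
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR (@CustomerID = 17));
    -- Or, to compile from the column's overall density statistics
    -- instead of any specific value:
    -- OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));
GO

-- Option 2: WITH RECOMPILE -- a fresh plan is compiled on every
-- execution and never cached, trading CPU for plan quality.
CREATE PROCEDURE dbo.GetOrders_Recompile
    @CustomerID int
WITH RECOMPILE
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO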


I was definitely NOT espousing the use of OPTIMIZE FOR as a SOLUTION for this issue - just to expose it. I DESPISE that "feature", because it GUARANTEES you will get a BAD PLAN for at least some of your executions, potentially many of them!


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1600977
Posted Friday, August 8, 2014 7:52 AM


TheSQLGuru (8/7/2014)


I was definitely NOT espousing the use of OPTIMIZE FOR as a SOLUTION for this issue - just to expose it. I DESPISE that "feature", because it GUARANTEES you will get a BAD PLAN for at least some of your executions, potentially many of them!


I apologize. I misunderstood: you were suggesting OPTIMIZE FOR as a way to identify the plans for the different parameter values, not as a way to correct the issue. While I agree it isn't the best option for fixing bad plans caused by parameter sniffing, it may be good enough, or it may even turn out to be the best option available. Only testing the different options would tell.

Refactoring the procedure to get the best plan is ideal, but sometimes not possible.


Bob
-----------------------------------------------------------------------------
How to post to get the best help
Post #1601182
Posted Friday, August 8, 2014 8:34 AM
My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time.

But, if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.
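
Both approaches, sketched as they might appear inside the proc body. The table, column, and parameter names below are illustrative, not taken from the OP's code:

-- Statement-level recompile: every execution sniffs the actual
-- parameter value and gets a plan tailored to it.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Forcing the hash join: either as a query-wide hint ...
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.CustomerID = @CustomerID
OPTION (HASH JOIN);

-- ... or as a join hint pinned to the one join that matters.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER HASH JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.CustomerID = @CustomerID;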


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1601209
Posted Friday, August 8, 2014 11:03 AM


ScottPletcher (8/8/2014)
My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time.

But, if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.


I am curious why you say HASH force would be safer. I would say just the opposite...


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1601279
Posted Friday, August 8, 2014 11:19 AM
TheSQLGuru (8/8/2014)
ScottPletcher (8/8/2014)
My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time.

But, if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.


I am curious why you say HASH force would be safer. I would say just the opposite...


My thinking is:
LOOP is extremely -- even prohibitively -- expensive on a very large number of rows.
HASH might not be ideal for a smaller number of rows, but it shouldn't be awful either.
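
One way to test that across the range of values, per the earlier suggestion: force each join type in turn and compare the I/O and timing counters. Table names and the literal 40 (one of the OP's problem values) are illustrative:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Loop join: cheap for a handful of rows, but logical reads grow
-- roughly linearly with the number of rows coming off the outer input.
SELECT o.OrderID
FROM dbo.Orders AS o
INNER LOOP JOIN dbo.OrderDetails AS d
    ON d.OrderID = o.OrderID
WHERE o.CustomerID = 40;

-- Hash join: pays a fixed cost to build the hash table, then scales
-- well; rarely ideal for tiny inputs, rarely awful for large ones.
SELECT o.OrderID
FROM dbo.Orders AS o
INNER HASH JOIN dbo.OrderDetails AS d
    ON d.OrderID = o.OrderID
WHERE o.CustomerID = 40;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;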


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1601283
Posted Friday, August 8, 2014 12:10 PM


ScottPletcher (8/8/2014)
TheSQLGuru (8/8/2014)
ScottPletcher (8/8/2014)
My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time.

But, if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.


I am curious why you say HASH force would be safer. I would say just the opposite...


My thinking is:
LOOP is extremely -- even prohibitively -- expensive on a very large number of rows.
HASH might not be ideal for a smaller number of rows, but it shouldn't be awful either.


Expensive in lots of logical IOs, yes. But those can be exceedingly quick due to cached iterative hits on the same page for multiple rows. More important, in my experience, are the page locks that will (hopefully) be taken, which can DRASTICALLY improve concurrency. Those blocking index/table scans are a killer from that perspective.
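
The locking claim is easy to check for yourself: run the query in one session and, from a second session, look at the granularity of the locks it holds via sys.dm_tran_locks. The session id 53 below is just a placeholder for the test session's spid:

-- From a second session, while the query of interest is running
-- (or holding locks inside an open transaction):
SELECT resource_type,            -- PAGE vs. OBJECT shows granularity
       request_mode,             -- S, U, X, ...
       request_status,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 53    -- placeholder: spid of the test session
GROUP BY resource_type, request_mode, request_status
ORDER BY resource_type;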


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1601299
Posted Friday, August 8, 2014 12:15 PM
TheSQLGuru (8/8/2014)
ScottPletcher (8/8/2014)
TheSQLGuru (8/8/2014)
ScottPletcher (8/8/2014)
My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time.

But, if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.


I am curious why you say HASH force would be safer. I would say just the opposite...


My thinking is:
LOOP is extremely -- even prohibitively -- expensive on a very large number of rows.
HASH might not be ideal for a smaller number of rows, but it shouldn't be awful either.


Expensive in lots of logical IOs, yes. But those can be exceedingly quick due to cached iterative hits on the same page for multiple rows. More important, in my experience, are the page locks that will (hopefully) be taken, which can DRASTICALLY improve concurrency. Those blocking index/table scans are a killer from that perspective.


I've just not had the experience of loops being "exceedingly quick" once the number of rows gets too large. Indeed, to me it often seems the only reason SQL is using a loop at all is that it couldn't accurately pre-determine the cardinality of the rows.
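
That cardinality point can be confirmed by comparing estimated and actual per-operator row counts. One plain-text way to get both in a single result set is SET STATISTICS PROFILE; the query below reuses the same illustrative shape as the earlier sketches:

SET STATISTICS PROFILE ON;

-- Run the suspect statement; the extra result set returned includes
-- both Rows (actual) and EstimateRows per plan operator. A Nested
-- Loops operator whose Rows is orders of magnitude above its
-- EstimateRows is the classic signature of the misestimate described
-- above.
SELECT o.OrderID
FROM dbo.Orders AS o
INNER JOIN dbo.OrderDetails AS d
    ON d.OrderID = o.OrderID
WHERE o.CustomerID = 40;

SET STATISTICS PROFILE OFF;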


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1601300