Real World Query Plans

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/sjones/realworldqueryplans.asp

  • Thanks for the quick reply to our requests about the end of that story.

    Has anyone used this technique before, of running the proc every X minutes to ensure the right query plan?

  • I saw a good solution to this problem on Ken Henderson's blog:

    http://blogs.msdn.com/khen1234/default.aspx

    which suggests a way of ensuring the correct plan is kept and avoids having to run the proc every minute.
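
    If it's the parameter-sniffing write-up I'm thinking of, the workaround usually given is to copy the parameters into local variables, so the optimizer builds a plan from general statistics rather than from the first caller's values. Something along these lines - the proc, table, and column names here are just made-up examples:

    -- Hypothetical proc; dbo.Orders and its columns are placeholder names.
    CREATE PROCEDURE dbo.GetOrdersByCustomer
        @CustomerID int
    AS
    BEGIN
        -- Copy the parameter into a local variable so the plan is built
        -- from general statistics instead of the sniffed parameter value.
        DECLARE @LocalCustomerID int
        SET @LocalCustomerID = @CustomerID

        SELECT OrderID, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerID = @LocalCustomerID
    END
    GO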

    regards

    David

  • A very interesting topic all round, especially for me, as it's exactly what I'm currently looking at doing on our system.

    As they say, 'every little helps', and I for one am glad to add this nugget to my list.

  • Good article - good to see combined wisdom from an expert and people from MS.

    The solution doesn't sound too great, but at least it's a solution!

  • I've had similar issues to this, where data growth on the system caused a previously high-performing query to become very slow. On investigation I found that it was doing table scans of some large tables in a situation where it should have been doing nested loops, as only a very small number of rows were needed.

    To fix it I specified optimizer hints forcing the query to use specific indexes, which pushed the query plan back to the preferred nested loops. Not a perfect solution, as often we don't have time to rewrite or hint our queries to get a specific query plan, but it does work.
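
    For what it's worth, the sort of hints I mean look like this - the table and index names are just examples, not from a real system:

    DECLARE @CustomerID int
    SET @CustomerID = 42

    -- WITH (INDEX(...)) pins the index choice; OPTION (LOOP JOIN) nudges the
    -- join strategy back to nested loops.
    SELECT o.OrderID, o.OrderDate, c.CompanyName
    FROM dbo.Orders o WITH (INDEX (IX_Orders_CustomerID))
    INNER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
    WHERE o.CustomerID = @CustomerID
    OPTION (LOOP JOIN)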

  • I also had a similar problem at work.

    I found out the procs were being run with totally abnormal (extreme) parameters at first, to test them. By the time a proc was actually put to use, its query plan had been optimized for the extreme parameters and not for the day-to-day standard values it normally received.

    After the tests were rebuilt to execute the procs with *standard* params first, everything worked fine, except for the odd occasion where a proc actually receives genuinely extreme data (of course the plan is then optimized for the normal data and those queries take longer), but as that happens once in a blue moon I can live with it. I don't know if running a proc with normal params every minute would really help... if the proc is THAT important, its plan shouldn't be getting chucked out of the cache too often anyway, should it?

    If the difference in execution is that extreme, then I would rather have two stored procs - one for extreme data and one for normal - so that each is optimized as far as possible for the data it receives. Of course there would have to be a switch on the front end... but then again, if it makes the users happy.
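
    As a rough sketch of what I mean, the switch could even live in a thin wrapper proc instead of the front end - all of the names below are invented for illustration:

    -- Thin dispatcher: each child proc gets (and keeps) its own cached plan.
    CREATE PROCEDURE dbo.GetActivity
        @AccountID int,
        @IsExtreme bit = 0   -- set by the front end, or by a quick row-count check
    AS
    BEGIN
        IF @IsExtreme = 1
            EXEC dbo.GetActivity_Extreme @AccountID
        ELSE
            EXEC dbo.GetActivity_Normal @AccountID
    END
    GO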

    nano

  • This drives me mad - previously some stored procs ran in 100 ms or less, then suddenly they jump to 2000-5000 ms without so much as a by-your-leave.

    We have tried using index hints in the past, but the data changes over time and the index hints start to slow the system down too. There's got to be a way of forcing the server to use the good plan permanently. We run the same stored proc millions of times a day (real-time data collection) to insert data. I am sure it is when the size of the tables reaches a threshold that SQL decides to change the plan.

    If you can identify which tables are the main culprits, I would run an update stats task (auto-update doesn't hack it for me). I also run an index rebuild on some of the smaller tables, which are mainly varchars, to bring them back in line. This improves things, but SQL still manages to throw a curve ball every month or so.
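
    For the record, the kind of maintenance step I mean looks roughly like this - the table names are made up:

    -- Refresh stats with a full scan on the worst offenders
    UPDATE STATISTICS dbo.DataCollection WITH FULLSCAN
    UPDATE STATISTICS dbo.DeviceReadings WITH FULLSCAN

    -- Rebuild all indexes on one of the smaller, mostly-varchar tables
    DBCC DBREINDEX ('dbo.DeviceConfig', '', 90)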

    Sim

    Manchester UK


    Cheers,

    Sim Lever

  • I've run into this same problem many times, myself.

    Remember: the plan that gets cached is the result of the first call to the stored proc. In our system, we have both large and small customers (data-wise). If a smaller customer is the first one to call the proc, the plan will be optimized for the smaller data returns, which trashes performance for the larger customers.

    I've seen procs that run in <1 sec jump to 8 secs when the plan is a bad one, yet Query Analyzer will still run the same query in 1 sec. I'm not sure why you end up with two different plans depending on the login, but somehow you do.

    In our case, I just added the WITH RECOMPILE option to the stored proc in question (only one was really causing this problem), and it has since prevented the issue from showing up again.

    IMHO, the small hit of recompiling against the current parameters is much less than the cost of relying on the initial call to generate the best plan for both large and small data sets.
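
    For anyone who hasn't used it, it's a one-line change to the proc definition - the proc, table, and parameter names here are only placeholders:

    -- Recompile on every call, so the plan always fits the current parameters.
    CREATE PROCEDURE dbo.GetCustomerActivity
        @CustomerID int
    WITH RECOMPILE
    AS
    BEGIN
        SELECT ActivityDate, Amount
        FROM dbo.CustomerActivity
        WHERE CustomerID = @CustomerID
    END
    GO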

    Just my $0.02 input on this sometimes very annoying issue.


    Greg Walker
    DBA, ExpenseWatch.com

  • I have a similar problem with a simple 3-table join in a proc which table scans (to return 1 row), but if the query is put into QA it uses the indexes - 12k I/O vs 12 I/O. I tried loads of things with the proc and it refused to use the indexes; in the end I dropped the proc and put it back, and everything worked fine. As I say, this is a simple query taking 2 int params and returning a single-row result.

    I did all the stuff with the stats, including 100% sampling, but to no avail - strange!
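
    Incidentally, a lighter-weight way to get the same effect as drop-and-recreate is to mark the proc for recompilation - the proc name here is just an example:

    -- Flags the proc so a fresh plan is compiled on its next execution
    EXEC sp_recompile 'dbo.GetRowByTwoKeys'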

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • I have a lot of procs that take 10-100x longer to compile than to actually run, so WITH RECOMPILE is not always an acceptable option.

    Last year, after getting frustrated by a few similar experiences, I started using OPTION (KEEPFIXED PLAN) to try to minimize the optimizer's futzing around with things after I get them the way I want them. So far I have not had any problems with this - I wonder if anyone else has?

    It seems to me that with carefully tuned procs I almost never want the optimizer messing around with them once they're deployed.
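
    In case anyone wants to try it, the hint just goes on the end of the statement you want left alone - the query and table name are made up:

    -- Prevents recompiles triggered purely by statistics changes;
    -- schema changes will still force a new plan.
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = 42
    OPTION (KEEPFIXED PLAN)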

  • Thanks for the tips, folks - I am going to experiment with WITH RECOMPILE and the KEEPFIXED PLAN option - sounds good to me - cheers

    Sim


    Cheers,

    Sim Lever

  • I had a similar situation on my last gig. The client had a 4-way box with 8 GB of memory. Procedures in the production application suddenly began taking a very long time to run. Processing time and record counts went way up.

    My first solution was to use hints to force the order of the joins, and that worked, but it seemed to defeat the purpose of the optimizer, so I kept looking for a better explanation/solution. I came across parameter sniffing and that looked like THE solution. Procedures went from 30 minutes to 30 seconds. After a while, however, the procedures went back to outrageous run times. It was all very confusing. The same procs ran fine on a single-processor box. We identified a good execution plan and a bad plan: the good plan used parallelism; the bad plan did an index scan with no parallelism. The single-processor machine ran the offending proc in about a minute.

    We tested and tried a variety of solutions. We knew that the range of parameters and the choice of execution plans were somehow confusing the optimizer. WITH RECOMPILE did not provide any benefit. Our solution was to break the procs down into smaller chunks of work and to focus on keeping statistics up to date.
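
    To give a feel for the "smaller chunks" idea, a driver proc that stages its working set and then calls child procs might look like this - all of the object names below are invented:

    CREATE PROCEDURE dbo.ProcessDailyLoad
        @LoadDate datetime
    AS
    BEGIN
        -- Stage the working set first...
        CREATE TABLE #Work (RecordID int NOT NULL, Amount money NOT NULL)

        INSERT INTO #Work (RecordID, Amount)
        SELECT RecordID, Amount
        FROM dbo.StagingRecords
        WHERE LoadDate = @LoadDate

        -- ...then hand the smaller pieces of work to separate procs
        -- (child procs can see #Work, and each gets its own, simpler plan).
        EXEC dbo.ApplyAdjustments
        EXEC dbo.PostTotals
    END
    GO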

    Mardy

  • We have experienced similar problems.

    I always suggest dropping and recreating the stored procedures that drag... and it has worked 99% of the time.

    Thanks for your tips, Mr. Jones.

    Baskar

  • ...Hopefully the "Plan Guides" feature in Yukon is the answer to all of our prayers...
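
    If the early documentation is anything to go by, a plan guide should let you attach a hint to a statement inside a proc without touching the proc itself - the object names and the hint below are purely illustrative:

    -- Attach an OPTIMIZE FOR hint to one statement in an existing proc
    EXEC sp_create_plan_guide
        @name = N'PG_GetOrders_TypicalCustomer',
        @stmt = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID',
        @type = N'OBJECT',
        @module_or_batch = N'dbo.GetOrdersByCustomer',
        @params = NULL,
        @hints = N'OPTION (OPTIMIZE FOR (@CustomerID = 1000))'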
