Any way to avoid compilation of a stored proc on first run?

  • Hi,

    I have noticed that a stored proc takes up to 40 seconds when first run, and then less than 1 second on subsequent calls. This, as indicated in an excellent article (http://blog.sqlauthority.com/2010/02/18/sql-server-plan-recompilation-and-reduce-recompilation-performance-tuning/), is because each time a new connection is made to the database an entry in dm_exec_cached_plans has to be created. Once created, the database has an execution plan and the query will be much faster.

    This explains why, if you stop and restart SQL Server, this entry in dm_exec_cached_plans will disappear and will need to be recreated by running the procedure again.

    We are about to deploy our solution to the client site, and I want to ensure that this application (a Silverlight app with a SQL Server back end) runs as fast as possible the first time.

    From my understanding, I would need to run the stored proc beforehand so that an entry is made in dm_exec_cached_plans.

    My questions around this are:

    1) If the data in the underlying tables is the same, will the stored proc need to be called with different parameter variations, so that new entries are inserted into dm_exec_cached_plans, or will the same stored proc use the same dm_exec_cached_plans entry regardless of parameter values?

    I ask as the article mentions that:

    "It could be that the data amount in underlying tables changed so much that the previously created plan is not optimized"

    This refers to the underlying physical table, which is different from the derived result set that my stored proc returns; depending on the parameters passed, the proc can return very different amounts of data.

    2) In the same vein as above, do I need to log in as each user from the Silverlight application so that an execution plan is stored for each one, or is it sufficient to log in as a single user, which will in turn create an entry in dm_exec_cached_plans, so that every user gets the page faster when they first log in?

    Thus you can see what I want to do: set things up so that, on a new deployment or restore, we carry out a process of running certain stored procs or logging into the application as different users, so that the user experience is as good as it can be.

    Any suggestions on what I am sure is a common and hopefully already-solved issue would be greatly appreciated. :-)

  • It's highly unlikely that the 40 seconds is the optimisation of the procedure. Unless you're talking several thousand lines of code and hundreds of queries (or nested views several levels deep), optimisation is usually very fast.

    What's very likely the cause is that the first run of the queries has to pull the data off disk; later runs read it straight from memory. You can check by running the procedure, clearing the data cache (DBCC DROPCLEANBUFFERS, on a dev/test server only) and then running it again. That will reuse the cached procedure plan but have to fetch the data off disk again.
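
    A minimal sketch of that test, assuming a placeholder procedure dbo.usp_MyProc (substitute your own name and parameters):

        -- Run the proc once first so the plan and data are cached, then:
        CHECKPOINT;                 -- flush dirty pages so DROPCLEANBUFFERS can discard everything
        DBCC DROPCLEANBUFFERS;      -- dev/test only: empty the buffer pool
        SET STATISTICS TIME ON;
        EXEC dbo.usp_MyProc @SomeParam = 1;   -- cold data cache, cached plan
        EXEC dbo.usp_MyProc @SomeParam = 1;   -- warm data cache, for comparison
        SET STATISTICS TIME OFF;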

    As for plan caching, it's not every time a connection is established. There's only one plan in cache for any procedure (well, mostly), not one per parameter value and not one per user. Also, plans can and will be discarded from cache as the SQL engine needs the memory, and as data changes in the tables plans will need to be recompiled.
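
    You can see this in the plan cache yourself; a sketch, with usp_MyProc as a placeholder name:

        -- One row per cached plan; a procedure normally shows up once,
        -- with usecounts climbing on every execution.
        SELECT cp.usecounts, cp.objtype, cp.size_in_bytes, st.text
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        WHERE cp.objtype = 'Proc'
          AND st.text LIKE '%usp_MyProc%';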

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi, thanks for getting back so quickly. Yes, I ran this and it still takes as long, so presumably the slowness is in reading from disk into memory.

    Presumably this cannot be preserved, and from a deployment point of view it will happen each time: a slow response the first time a user calls it and then quick on subsequent calls.

    I ask because I will get asked by the client 'why is the app slow when I first log in, but then fast after that?'

  • You could always write some standard select statements for regularly-accessed data, to 'prime' the cache after deployment.

    Even execute some of those procs if they don't do any DML.
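
    A sketch of such a warm-up script, run once after deployment (all object names are placeholders for your own):

        -- Touch the hot tables so their pages are read into the buffer pool.
        -- COUNT(*) may only scan the narrowest index; selecting the columns
        -- the procs actually use warms more of the right pages.
        SELECT COUNT(*) FROM dbo.Orders;
        SELECT COUNT(*) FROM dbo.Customers;

        -- Pre-run the read-only procs (no DML) so plans and data are both warm.
        EXEC dbo.usp_GetOrderPage @PageNumber = 1, @PageSize = 30;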

  • chambers_sh (8/21/2012)


    Presumably this cannot be preserved, and from a deployment point of view it will happen each time: a slow response the first time a user calls it and then quick on subsequent calls.

    No, it'll be slow the first time the data is accessed (and the first time after a restart).

    You could try improving the throughput of the IO subsystem, or reducing the amount of data the procedure reads. 40 seconds of data access is a lot (unless you have a dog-slow storage subsystem); I can pull a few million rows off my desktop's cheap, slow hard drive in less than that.

    Gail Shaw
  • Hi, thanks for that.

    I can't do anything about the server, as it is the client's machine and they won't budge on improving the spec.

    There is not actually a lot of data returned, only 30 rows as it is paged, so it is not the amount of data, just the query, I think?

  • Maybe.

    How much data it returns is not the question. The question is how much data it has to read through to get those 30 rows. If that's a couple of GB, it's going to be slow no matter what.
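
    One way to check, assuming the same placeholder proc as above:

        SET STATISTICS IO ON;    -- reports pages read per table
        EXEC dbo.usp_GetOrderPage @PageNumber = 1, @PageSize = 30;
        SET STATISTICS IO OFF;
        -- 'logical reads' = total 8 KB pages touched; 'physical reads' plus
        -- 'read-ahead reads' = pages that had to come off disk. Tens of
        -- thousands of logical reads for 30 rows points at the query/indexing.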

    Gail Shaw
  • OK, thanks for confirming in any case. Cheers :-)
