Stored Procedures and Caching

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/bkelley/procedurecache.asp

    K. Brian Kelley
    @kbriankelley

  • I like this article. Even though you say it only touches the surface of the subject it goes into it enough to give me a better understanding of stored procedure caching.

    I have a couple questions about portions of your article.

First, when you say that a stored procedure becomes serialized when being compiled and as such can cause bottlenecks, do you mean that no one can execute it until the compile is finished, or just that it can't be modified until the compile is complete?

Second, I tried a test of running a stored procedure not in master that started with sp_, and the same one in the same database that didn't start with sp_. The difference in milliseconds was very small. These were simple stored procedures. Would the benefit in speed be greater with a more complex stored procedure?

    Robert Marda

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

It indeed causes blocking on the stored procedure, meaning it's a wait-in-line type of thing. Here's a quote from the Q article that made my eyes widen with fear:

    quote:


    If many connections are simultaneously running the same stored procedure, and a compile lock must be acquired for that stored procedure each time it is run, it is possible that system process IDs (SPIDs) may begin to block each other as they each try to acquire an exclusive compile lock on the object.


    Until I started doing the research on caching I didn't realize that blocking could take place in this manner.

As for the sp_ prefix, a SPID has to get the [COMPILE] lock before it can begin the second search. But the [COMPILE] lock is exclusive, meaning it needs the stored procedure all to itself. I haven't tested it, but I assume this means that a SPID waiting on an exclusive [COMPILE] lock has to wait until the other SPID completes execution, since it needs the stored procedure exclusively. If this is true, then more complex stored procedures would indeed cause some serious delays. I'm going to have to put a WAITFOR DELAY in an sp_ and see what happens to be sure.
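A quick way to run that test (a sketch only; the procedure name sp_testlock and the 10-second delay are made up for illustration):

```sql
-- Create an sp_-prefixed procedure in a user database (not master), so
-- each execution checks master first and takes a [COMPILE] lock.
CREATE PROCEDURE sp_testlock
AS
    WAITFOR DELAY '00:00:10'        -- hold the procedure for 10 seconds
    SELECT GETDATE() AS finished_at
GO
```

Run EXEC sp_testlock from two connections at once; from a third connection, sp_who2 (or a look at master..syslockinfo) should show whether the second SPID sits blocked waiting for the compile lock until the first finishes.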

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

  • Excellent article Brian. It is one of the best written articles I have seen on stored procedure execution on the server.

    One issue I was wondering if you'd look at is a stored procedure that runs two different types of selects such as:

create procedure MyProc
    @id1 int = null, @id2 int = null
as
if @id1 is null
    select * from MyTable where id2 = @id2
else
    select * from MyTable where id1 = @id1
return

    Does this cause a recompile if different parameters are used?

    Steve Jones

    steve@dkranch.net

I haven't looked into it; it's another area where I need to do more research. In my production environment, where DBAs were allowed only an advisory role, developers coded stored procedures using the CRUD methodology. Okay, fine, different stored procedures covering each operation, right? No, they did it all in a single stored procedure. I'm assuming a recompile has to happen based on which operation they perform, but I haven't confirmed it with testing.
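One workaround worth testing is splitting each branch into its own sub-procedure, so each SELECT keeps its own cached plan. A sketch against Steve's example above (the sub-procedure names are made up):

```sql
-- Each branch lives in its own procedure with its own cached plan.
CREATE PROCEDURE GetById1 @id1 int AS
    SELECT * FROM MyTable WHERE id1 = @id1
GO
CREATE PROCEDURE GetById2 @id2 int AS
    SELECT * FROM MyTable WHERE id2 = @id2
GO
-- The wrapper only branches; it contains no data access of its own.
CREATE PROCEDURE MyProc @id1 int = NULL, @id2 int = NULL
AS
    IF @id1 IS NULL
        EXEC GetById2 @id2
    ELSE
        EXEC GetById1 @id1
GO
```

If the branching is what forces recompiles, this layout should isolate the cost to whichever sub-procedure actually changes.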

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

  • Good article! How does exec() work compared to sp_executesql? Lately I've been trying to develop the habit of dbo qualifying all my sql...not there yet! And as far as dynamic sql and base table access, I've got a thought and a question or two that I'll post separately!

    Andy

Another thing to investigate! I've seen all the standard "Microsoft prefers you use sp_executesql" advice, but I've not run any specific tests. Since it's a "(server internal)" system stored procedure, I'm thinking it has additional code to assist with caching, but that's just a guess.
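For comparison, here is the basic difference between the two (a sketch; MyTable and its id1 column are hypothetical). EXEC() submits a literal string, so every distinct value produces a distinct ad hoc plan, while sp_executesql keeps the statement text constant and passes values as parameters, letting one plan be reused:

```sql
-- EXEC(): the id value is baked into the string, so each value is
-- cached (and compiled) as a separate ad hoc statement.
DECLARE @id int, @sql nvarchar(200)
SET @id = 42
SET @sql = 'SELECT * FROM MyTable WHERE id1 = ' + CAST(@id AS nvarchar(10))
EXEC (@sql)

-- sp_executesql: the statement text never changes; the value travels
-- as a typed parameter, so one cached plan serves every id.
EXEC sp_executesql
    N'SELECT * FROM MyTable WHERE id1 = @id',
    N'@id int',
    @id = 42
```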

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

This article has allowed me to bring down the execution time for an SP from 3 secs to 0.5 secs. Great job. I'm surprised that this information is not present in Books Online, especially since SQL Server recommends the use of temporary tables over cursors. However, I found that every DML statement on a temporary table causes the SP to recompile. This can be avoided by embedding all temp table DML within sp_executesql. This brought the execution time of my SP from 2.5 to 0.5 secs.
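A sketch of that workaround as I understand it (procedure, table, and column names are hypothetical): the temp table is created in the procedure's scope, and the DML against it is pushed into a child batch via sp_executesql, which can still see the outer #table:

```sql
CREATE PROCEDURE usp_report
AS
    CREATE TABLE #work (id int, amount money)

    -- Plain DML against #work in the procedure body can trigger a
    -- recompile of the whole procedure; running it through
    -- sp_executesql executes it as a child batch instead, leaving
    -- the outer procedure's plan alone. Child batches can see temp
    -- tables created by their caller.
    EXEC sp_executesql
        N'INSERT INTO #work (id, amount)
          SELECT id1, amount FROM MyTable'

    SELECT * FROM #work
GO
```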

    I have one more observation

This is with respect to SET ANSI_WARNINGS. Here is part of the profiler trace when the SP contains a SET ANSI_WARNINGS OFF statement:

Event             Duration  TextData
SP:CacheHit       0         execute sp_checkavail
SP:CacheMiss      0         sp_checkavail
SP:CacheMiss      0         sp_checkavail
SP:CacheMiss      0         sp_checkavail
SP:CacheRemove    0         sp_checkavail
SP:CacheInsert    0         sp_checkavail
SP:Starting       0         sp_checkavail
SP:StmtStarting   0         -- sp_checkavail SET NOCOUNT ON
SP:StmtCompleted  0         -- sp_checkavail SET NOCOUNT ON
SP:StmtStarting   0         -- sp_checkavail SET ANSI_WARNINGS OFF
SP:StmtCompleted  0         -- sp_checkavail SET ANSI_WARNINGS OFF
SP:StmtStarting   0         -- sp_checkavail IF @bedrooms = 'H' or @bedrooms = 'S'
SP:Recompile      0         sp_checkavail
SP:CacheMiss      0         sp_checkavail
SP:CacheMiss      0         sp_checkavail
SP:CacheInsert    0         sp_checkavail
SP:StmtStarting   0         -- sp_checkavail IF @bedrooms = 'H' or @bedrooms = 'S'
SP:StmtCompleted  391       -- sp_checkavail IF @bedrooms = 'H' or @bedrooms = 'S'

When I comment out the SET ANSI_WARNINGS OFF, the trace looks like this:

Event              Duration  TextData
SP:CacheHit        0         execute sp_checkavail
SP:CacheMiss       0         sp_checkavail
SP:ExecContextHit  0         sp_checkavail
SP:Starting        0         sp_checkavail
SP:StmtStarting    0         -- sp_checkavail SET NOCOUNT ON
SP:StmtCompleted   0         -- sp_checkavail SET NOCOUNT ON
SP:StmtStarting    0         -- sp_checkavail SET ANSI_WARNINGS OFF
SP:StmtCompleted   0         -- sp_checkavail SET ANSI_WARNINGS OFF
SP:StmtStarting    0         -- sp_checkavail IF @bedrooms = 'H' or @bedrooms = 'S'
SP:StmtCompleted   0         -- sp_checkavail IF @bedrooms = 'H' or @bedrooms = 'S'

Any comments?

  • From the Q article (Q243586):

    quote:


    Recompilations Due to Certain SET Statements Executed in Stored Procedure

    The following five SET options are set to ON by default:

    * ANSI_DEFAULTS

    * ANSI_NULLS

    * ANSI_PADDING

    * ANSI_WARNINGS

    * CONCAT_NULL_YIELDS_NULL

    If you execute the SET statement to set any of these options to OFF, the stored procedure will be recompiled every time it runs. The reason for this is that changing these options may affect the query result that triggered the recompilation.


    That explains the recompiles with the SET statements.
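In other words, the fix is to move the SET statement out of the procedure body. A sketch with hypothetical names:

```sql
-- This pattern forces a recompile on every execution, because the
-- procedure body flips one of the five listed options to OFF:
CREATE PROCEDURE usp_slow
AS
    SET ANSI_WARNINGS OFF
    SELECT COUNT(*) FROM MyTable
GO

-- The alternative: leave the body alone and set the option on the
-- calling connection before executing the procedure:
SET ANSI_WARNINGS OFF
EXEC usp_fast    -- same body as usp_slow, minus the SET statement
```

Different connection-level option combinations can still map to different cached plans, but the plan for each combination should be reusable instead of being recompiled on every run.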

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

  • Fantastic article Brian.

    I noticed with a proc that I'm running, that it recompiles the proc based on the parameters that are being passed to it. So if you run the proc with param A, it does not recompile for params B, C and D. But the minute you run it with param E it initiates a recompile. And the funny thing is that when you exec the proc again with param A it initiates a recompile once more. I analyzed the SP statements in Profiler and noticed that the one that causes the recompile had a couple more temp tables that it was using based on the data.

So the first thing I did was to bring all the DDL statements to the top of the proc. This seemed to fix the problem: the speed improved from 90 secs to 50 secs on the first run, then to 2-5 secs (depending on rows returned) using the cached query plan, and running it with param E no longer caused a recompile. But only for a while. A couple of hours later, I noticed it went back to its old behaviour. One thing I noticed was that one of the other DBAs had changed the number of processors used by SQL Server from 4 to 3. Resetting this did not change anything. Is there anything that could have been reset at the database level that might have started the problems with this proc again? Could something have changed with tempdb or any of the dboptions?

    Thanks for any suggestions

    John

Actually, I found that the problem was that I was compiling the proc with ANSI_NULLS OFF; the minute I changed it back to ON, the recompiling stopped once the plan was available in cache.
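Since the ANSI_NULLS setting in effect at CREATE time is stored with the procedure, the fix can be verified after the fact. A sketch (usp_myproc and MyTable are hypothetical):

```sql
-- The setting in force when CREATE PROCEDURE runs is saved with the
-- procedure, so recreate it under ANSI_NULLS ON:
SET ANSI_NULLS ON
GO
CREATE PROCEDURE usp_myproc
AS
    SELECT * FROM MyTable
GO

-- Confirm which setting was saved with the procedure:
SELECT OBJECTPROPERTY(OBJECT_ID('usp_myproc'), 'ExecIsAnsiNullsOn')
-- 1 means the procedure was created with ANSI_NULLS ON
```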

Now that that part of the problem was taken care of, I started finding some other peculiar behavior. When I run the proc from Query Analyzer on different client PCs, it uses the cached plan. The minute I run the same proc from the Internet application for which it was designed, it seems to recompile. Again, across the different client PCs, if you run the app it uses the cached plan. Going back to Query Analyzer again flushes the cache.

    It appears like running the proc from two different apps results in flushing the cache. Have you seen this kind of behavior and what is the reason for it? Is there a way around it at all?

    Thanks for any help,

    John

You may want to explicitly set ANSI_NULLS on in the Internet app. Perhaps it is being set to OFF there, and that's causing the recompiles?

I'll be honest: the web apps we have running where I work don't have similar recompile problems, so I'm reaching for possibilities.
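One way to chase this down is to compare the session options each client actually has. For instance:

```sql
-- Run this from a Query Analyzer window, then from the web app's
-- connection (e.g. via a temporary debug page), and compare the lists:
DBCC USEROPTIONS
-- Any option that differs between the two (ansi_nulls, ansi_warnings,
-- arithabort is a common Query Analyzer vs. ADO difference, ...) can
-- give each client its own cached plan, which from the outside looks
-- like the plan being flushed back and forth.
```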

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

I shall try to do this. My only confusion then is how the Internet app by itself behaves as if ANSI_NULLS is set to ON. What I mean is: the first time you run it, it takes a while to create the plan, but thereafter it pulls the plan from cache. But the minute you run the proc from Query Analyzer (QA), it again reloads the plan, taking a long time, and then caches it for all queries run from within QA. And then you revert back to the app and again see the rebuild of the query plan. It's almost like QA steps on the Internet app and vice versa. But I shall try resetting ANSI_NULLS through the app and get back to you on the findings.

    Thanks for the quick response

    John

I also realized that since the proc has been compiled with ANSI_NULLS set to ON, it should always run under ANSI_NULLS ON. I ran the proc from a QA window with the connection properties set to ANSI_NULLS OFF, and it did not cause the proc cache to be flushed or the proc to recompile, because the proc had been compiled with ANSI_NULLS ON.
