Deciding not to use a plan

  • Hi everyone. I need some help here. Until now I thought I understood how the procedure cache worked, but we have been experiencing some performance problems.

    SQL Server 2012 EE SP3 x64. The server has 256GB of RAM with 222GB allocated to SQL Server, yet SQL Server is only using 38.5GB total. The procedure cache hit rate is averaging about 35-50%. Optimize for ad hoc workloads is on.

    I see the same simple queries being inserted into the cache each time they run. Here is an example.

    RPC:Starting     exec sp_execute 5,'0','546FGGSD-DA34-422V-4FB1-FF2345FGGFS1'

    SP:CacheInsert   (@P0 varchar(8000),@P1 varchar(8000)) UPDATE T_LOAD_MASTER SET PURGE_FLAG = @P0 WHERE PROXY_LOAD_ACCT_TOKEN = @P1

    SP:StmtStarting  UPDATE T_LOAD_MASTER SET PURGE_FLAG = @P0 WHERE PROXY_LOAD_ACCT_TOKEN = @P1

    RPC:Completed    ...

    One second later it will run again and do another CacheInsert instead of a CacheHit. There are no CacheRemove events occurring (the plan cache is only 3MB).

    It seems that SQL Server doesn't think it can reuse the plans. Does anyone have an idea what I should check to address this issue?
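
    For reference, a quick way to watch whether a statement's plan is getting reused is to check usecounts in the plan cache. A minimal sketch using the standard DMVs; the LIKE filter just targets the statement from the trace above:

    SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, cp.size_in_bytes, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%T_LOAD_MASTER%'
    ORDER BY cp.usecounts DESC;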

    Thanks much,

    Howard

  • Right now plan cache issues are the LEAST of your problems, assuming those two fields in that update are not actually VARCHAR(8000)!!! Someone has set the execution settings in your code to "hey, let's parameterize queries because I heard that was a good thing" and left the default configuration, which turns every parameter into varchar(8000). This exact same thing was at the root of one of the most egregious 3rd-party apps I have ever had the pleasure of smacking down. Go fix that (make the parameters EXACTLY THE SAME AS THE COLUMN DATA TYPES) and watch how much better your server and queries perform.

    Then go look at other issues...

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Could be a bunch of things

    The one that comes to mind looking at it is that the table isn't schema-qualified, so SQL Server may treat different users as referring to different objects, which means different plans. This is one reason why schema-qualifying objects is a really good idea.
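
    For example (assuming the table actually lives in dbo), the statement would be written as:

    -- schema-qualified: every user resolves to the same object, so the plan can be shared
    UPDATE dbo.T_LOAD_MASTER
    SET PURGE_FLAG = @P0
    WHERE PROXY_LOAD_ACCT_TOKEN = @P1;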

    The second is that you have 'optimize for ad hoc workloads' on. This means that the first time a query is seen, all that's cached is a stub noting that the query has been seen. Only on the second execution does the plan get compiled and cached. It's there to prevent plan cache bloat when you have lots of single-use ad-hoc queries.
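
    You can see those stubs in the cache directly; a sketch (cacheobjtype stays 'Compiled Plan Stub' until the second execution replaces it with a full plan):

    SELECT cp.cacheobjtype, cp.usecounts, cp.size_in_bytes, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE cp.cacheobjtype = 'Compiled Plan Stub';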

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Kevin and Gail. Very good observations. I'll be bringing both to the attention of the developers. It is an in-house application, so it is not surprising that the T-SQL has not been evaluated and critiqued.

    I remember that originally all the SP parameters were coming across as Unicode nvarchar(4000) and were being cast by SQL Server, which was preventing index use. I asked the middleware admins to change the Java connection string to include SendStringParametersAsUnicode=False, and after that the parameters came across as varchar(8000) and used the indexes.

    I have a few questions:

    Is the size of the variable specified in the prepare statement or somewhere else? Who needs to fix this: the SQL developer or the middleware admins in WebSphere?

    Is there a performance impact to using varchar(8000) parameters? Is it that the server must reserve the memory space to possibly hold that much data each time it is run?

    I did not mention it in the OP, but this server with the low procedure cache hit rate is only one environment. There are other environments that run the same unqualified code and have no problem hitting the cache. I'm hoping for a clue as to what makes the servers behave differently.

    EDIT: from my reading, it seems to be normal behavior for Java to specify either nvarchar(4000) or varchar(8000).

    http://stackoverflow.com/questions/2344838/why-jpa-hibernate-and-jtds-always-uses-varchar8000-for-string-columns

  • PHXHoward (1/29/2016)


    Is the size of the variable specified in the prepare statement or somewhere else? Who needs to fix this? SQL developer or Middleware admins in Websphere?

    Sounds like the app may be using jTDS. Whatever the driver, though, the parameter lengths are defined in the app code when building the command to send to the database.

    Is there a performance impact to using varchar(8000) parameters? Is it that the server must reserve the memory space to possibly hold that much data each time it is run?

    Technically it results in an implicit data type conversion (even from varchar(n) to varchar(n) with different n's). I have found that the optimizer can usually negotiate this and still use an index, but I wouldn't put it past the optimizer to fail to do that in some scenarios and skip the index.
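
    A minimal repro sketch (hypothetical table and names) to see how the optimizer handles the length mismatch; compare the actual execution plans of the two selects:

    -- hypothetical table: column is varchar(50)
    CREATE TABLE dbo.ParamTest (Token varchar(50) NOT NULL PRIMARY KEY);

    DECLARE @Exact  varchar(50)   = '546FGGSD-DA34-422V-4FB1-FF2345FGGFS1';
    DECLARE @Padded varchar(8000) = '546FGGSD-DA34-422V-4FB1-FF2345FGGFS1';

    SELECT Token FROM dbo.ParamTest WHERE Token = @Exact;  -- index seek
    SELECT Token FROM dbo.ParamTest WHERE Token = @Padded; -- usually still a seek, but check the plan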

    I did not mention it in the OP but this server with the low procedure cache hit rate is one environment. There are other environments that run the same unqualified code and have no problem hitting the cache. I'm hoping for a clue as to what makes the servers behave differently.

    Same SP/Build level? Both optimized for ad hoc?
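
    A quick way to compare the environments (standard server properties and catalog views); run on each instance:

    SELECT SERVERPROPERTY('ProductVersion') AS build,
           SERVERPROPERTY('ProductLevel')   AS sp_level;

    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = 'optimize for ad hoc workloads';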

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • Is there a performance impact to using varchar(8000) parameters? Is it that the server must reserve the memory space to possibly hold that much data each time it is run?

    YES YES YES, as I mentioned. It is EXCEPTIONALLY BAD in many cases: it burns CPU converting data, it prevents getting accurate statistics (which often leads to DEVASTATINGLY bad query plans that harm not only performance but also concurrency, both in numerous ways), AND it voids index seeks where they would be appropriate, causing index scans and, you guessed it, bad performance and bad concurrency.

    I am not an NHibernate person, but it is my understanding that it (or its connectivity mechanism) CAN be programmed/configured to use the exact correct SQL Server data types in its queries. ADO.NET does the SAME DUMB THING by default too, as did Entity Framework back in the day and possibly still does.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (1/29/2016)


    Is there a performance impact to using varchar(8000) parameters? Is it that the server must reserve the memory space to possibly hold that much data each time it is run?

    YES YES YES, as I mentioned. It is EXCEPTIONALLY BAD in many cases: it burns CPU converting data, it prevents getting accurate statistics (which often leads to DEVASTATINGLY bad query plans that harm not only performance but also concurrency, both in numerous ways), AND it voids index seeks where they would be appropriate, causing index scans and, you guessed it, bad performance and bad concurrency.

    I am not an NHibernate person, but it is my understanding that it (or its connectivity mechanism) CAN be programmed/configured to use the exact correct SQL Server data types in its queries. ADO.NET does the SAME DUMB THING by default too, as did Entity Framework back in the day and possibly still does.

    Same thing happened to us with ColdFusion. All the parameters were coming across as nvarchar(4000). Our problem was that it was actually causing deadlocks between unrelated tables in the database. After two weeks of trying to debug the deadlock issue, the engineer working the problem changed the CF configuration to send the data as varchar(8000); the scans were replaced with seeks and the deadlocks went away.

    CF11 allows for more granular typing of parameters and they are now sending the correct data types for the parameters. Amazing what using the correct data types can do.
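
    A sketch of why the Unicode parameters forced scans (hypothetical table; behavior varies by collation, and SQL collations in particular cannot seek through the nvarchar-to-varchar conversion):

    CREATE TABLE dbo.ConvTest (Token varchar(50) NOT NULL PRIMARY KEY);

    DECLARE @N nvarchar(4000) = N'ABC123';
    DECLARE @V varchar(8000)  = 'ABC123';

    -- nvarchar has higher data type precedence, so the varchar COLUMN gets converted;
    -- on a SQL collation this typically forces a scan
    SELECT Token FROM dbo.ConvTest WHERE Token = @N;

    -- same base type: only the parameter side is adjusted, so the index can seek
    SELECT Token FROM dbo.ConvTest WHERE Token = @V;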

  • OK, I'll see if I can get any traction on the issue of WebSphere and/or Java defaulting to varchar(8000). I don't know how it is configured on that end, so I'll have to ask someone else, maybe a middleware admin, to help with that issue.

    Thanks again for the good advice on these things.

    In the meantime, I would love to know why this one particular SQL instance that runs this code is not reusing plans, whereas other environments with the same code are reusing plans.

    Instances are all 11.0.6020 with Optimize for ad hoc workloads on.

  • There are a lot of things that can affect query plan reuse, and most have some level of nuance or persnickety detail to them. With your setup I would look at SET option differences between the executing mechanisms, and also at the fact that you are not using two-part naming (really improper in any case). Check whether the default schema is the same between systems.
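
    Both of those are visible as plan cache keys; a sketch using sys.dm_exec_plan_attributes (set_options and user_id are part of the cache key, and user_id varies by default schema when objects aren't schema-qualified):

    SELECT cp.usecounts, pa.attribute, pa.value
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%T_LOAD_MASTER%'
      AND pa.attribute IN ('set_options', 'user_id')
      AND pa.is_cache_key = 1;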

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • I found that Resource Governor had been enabled on the instance in an attempt to keep ad hoc queries from impacting the application. Apparently the configuration was starving the system of memory (RESOURCE_SEMAPHORE waits). When I disabled Resource Governor, the procedure cache hit rate issue immediately resolved itself. I'm not sure which part of the Resource Governor config was causing this, but I intend to find out. I'm glad to have it figured out, and very grateful for the advice regarding the size of the variables and the missing schema qualification in the queries.
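
    For anyone hitting the same thing, a sketch of the checks and the off switch involved (standard DMVs; ALTER RESOURCE GOVERNOR DISABLE is the documented way to turn it off):

    -- memory grant pressure per resource pool
    SELECT pool_id, target_memory_kb, available_memory_kb, grantee_count, waiter_count
    FROM sys.dm_exec_query_resource_semaphores;

    -- current pool configuration
    SELECT pool_id, name, min_memory_percent, max_memory_percent
    FROM sys.dm_resource_governor_resource_pools;

    ALTER RESOURCE GOVERNOR DISABLE;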

    Best,

    Howard

  • For some reason, min_memory_percent for both the OLTPPool and the ReportPool was set to 50, which looks like a typo. Between them the two pools reserve 100% of the query memory, leaving the default pool with effectively nothing; that must have starved the rest of the system and led to the procedure cache not keeping/reusing plans.

    CREATE RESOURCE POOL [OLTPPool] WITH
    (
        min_cpu_percent = 50,
        max_cpu_percent = 100,
        min_memory_percent = 50,
        max_memory_percent = 100,
        cap_cpu_percent = 100,
        AFFINITY SCHEDULER = AUTO
    );
    GO

    CREATE RESOURCE POOL [ReportPool] WITH
    (
        min_cpu_percent = 50,
        max_cpu_percent = 100,
        min_memory_percent = 50,
        max_memory_percent = 100,
        cap_cpu_percent = 100,
        AFFINITY SCHEDULER = AUTO
    );
    GO
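
    If the intent was only to cap the pools rather than reserve memory for them, the likely fix (my assumption: no minimum reservation was actually intended) would be:

    -- assumes the 50s were a typo and no memory should be reserved
    ALTER RESOURCE POOL [OLTPPool]   WITH (min_memory_percent = 0);
    ALTER RESOURCE POOL [ReportPool] WITH (min_memory_percent = 0);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO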
