
Effects of Max Mem on Plan Cache


While reading a Microsoft article (which can be found here) to make sure I understood what can cause a query plan to be recompiled or removed from the plan cache, I got to thinking a bit.

The article lists the following as things that can cause a plan to be removed from cache or to be recompiled.

  • Execution plans remain in the procedure cache as long as there is enough memory to store them. When memory pressure exists, the Database Engine uses a cost-based approach to determine which execution plans to remove from the procedure cache. To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan.
  • Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
  • Changes made to a single procedure, which would drop all plans for that procedure from the cache (ALTER PROCEDURE).
  • Changes to any indexes used by the execution plan.
  • Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
  • Dropping an index used by the execution plan.
  • An explicit call to sp_recompile.
  • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
  • Executing a stored procedure using the WITH RECOMPILE option.

The first item removes a plan from cache, while the rest mark the plan as invalid so that a recompile is forced on the next execution of the SQL statement.  But then the question comes: why make the distinction between the two?  Doesn't removing a plan from cache in essence equate to a recompile further down the road when the query is executed again?  While semantically it is not a recompile, it sure sounds a lot like one.  The query was compiled, invalidated and removed from cache, and then compiled again when executed the next time around.
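As a quick aside, one rough way to see the distinction is to compare the recompile counter against the number of entries in the plan cache.  Below is a minimal sketch of my own (not from the Microsoft article); the counter is cumulative, so you would sample it before and after a workload, and the object_name filter is left loose because it differs between default and named instances.

-- Recompiles reported by the server (cumulative counter, so compare two samples)
SELECT cntr_value AS ReCompilations
FROM sys.dm_os_performance_counters
WHERE counter_name = N'SQL Re-Compilations/sec'
  AND object_name LIKE N'%SQL Statistics%';
GO
-- Number of plans currently sitting in the plan cache
SELECT COUNT(*) AS PlanCount
FROM sys.dm_exec_cached_plans;
GO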

When thinking about all of this, how can we see that memory pressure can “invalidate” query plans?  That is actually quite simple.  We can see the same net effects with the following experiment.

WARNING: DO NOT DO THIS ON ANY ENVIRONMENT THAT IS NOT A SANDBOX ENVIRONMENT.

SELECT COUNT(*) AS PlanCount
FROM sys.dm_exec_cached_plans;
GO
EXEC sys.sp_configure N'max server memory (MB)', N'1024';
GO
RECONFIGURE WITH OVERRIDE;
GO
SELECT COUNT(*) AS PlanCount
FROM sys.dm_exec_cached_plans;
GO
EXEC sys.sp_configure N'max server memory (MB)', N'8100';
GO
RECONFIGURE WITH OVERRIDE;
GO

In this setup, I have a sandbox server with a simulated load that keeps plenty of plans in cache, and SQL Server is happy with about eight gigabytes of memory allocated to it.  To show that memory pressure can cause a bunch of plans to be invalidated (and removed from the plan cache), I am going to take memory away from SQL Server and, in essence, simulate a memory leak or memory pressure.
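If you want to watch how much memory the plan cache itself is holding while running the experiment, a quick look at the memory clerks does the trick.  This is a small sketch of my own rather than part of the experiment script, and it assumes SQL Server 2012 or later, where the pages_kb column exists.

-- Plan cache memory by cache store (CACHESTORE_SQLCP = ad hoc/prepared, CACHESTORE_OBJCP = procedures)
SELECT [type], SUM(pages_kb) AS PlanCacheKB
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'CACHESTORE_SQLCP', N'CACHESTORE_OBJCP')
GROUP BY [type];
GO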

The next screenshot shows that the setting takes effect on SQL Server immediately, and all of this without a service restart (despite that myth I keep hearing).

Max Memory in SQL Server
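If you would rather confirm the change with a query than with the screenshot above, a simple check against sys.configurations (not part of the original demo script) shows both the configured value and the value currently in use.

-- Configured value vs. the value currently in use
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';
GO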

Seeing that the memory setting takes immediate effect, we can turn our attention to the query that was run.  In the experiment script, I took a count of the number of plans in the plan cache first, then changed the max memory, and then took a count of the number of plans in cache again.  The next image shows the results of those two plan count queries.  It is obvious here that the number of plans in cache was severely impacted by this change in max memory when taking TOO much memory away from SQL Server (this is why you should only do this on a sandbox server).

Max Mem effect on Plan Cache
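To dig a little deeper into what survived the purge, the remaining plans can be broken down by type.  This is an add-on of my own and not part of the screenshots above.

-- Remaining cached plans grouped by plan type
SELECT objtype,
       COUNT(*) AS PlanCount,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 AS SizeKB
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY PlanCount DESC;
GO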


If you happen to take too much memory away from SQL Server, you could end up in a severe memory pressure situation where connections are limited.  Even trying to increase max memory could be severely hampered due to insufficient resources in the default resource pool.  Here is a blog post that demonstrates the effects on connections due to this error (even as bad as SQL Server not starting up).
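If you suspect that is where you are, a peek at the default resource pool can help confirm it.  This is a hedged sketch on my part; the resource governor DMVs are queryable even when Resource Governor is not enabled.

-- How much memory the internal and default pools hold versus their targets
SELECT name, used_memory_kb, target_memory_kb, max_memory_kb
FROM sys.dm_resource_governor_resource_pools
WHERE name IN (N'default', N'internal');
GO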

Since the problem is in the default resource pool, you could try a connection to the DAC and modify the max memory there.  If the pressure is severe enough, you may be limited in your ability to connect to the DAC as well.  In that case, you may need to start up in single user mode as referenced in that prior blog post.  Sometimes, it is enough to reduce the number of connections or resource requirements to regain access to the server and change the max memory appropriately.  Suffice it to say, do not run this on a production type of box unless you are ready to cause a bit of an outage.  The intent here is solely to share that max memory can have an immediate impact on compiles and plans in cache.
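For completeness, here is a rough sketch of what that DAC recovery could look like.  The server name is a placeholder, the -A switch on sqlcmd requests the dedicated admin connection, and the memory value is just the one used in this demo; treat it as illustrative rather than a guaranteed fix.

-- From a command prompt, request the dedicated admin connection (note the -A switch):
--   sqlcmd -S YourServerName -E -A
-- Once connected over the DAC, raise max server memory back to a sane value:
EXEC sys.sp_configure N'show advanced options', N'1';
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure N'max server memory (MB)', N'8100';
RECONFIGURE WITH OVERRIDE;
GO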
