Is it possible to limit Maximum Workspace Memory or the portion of it allocated to a single query?

  • Hi all,

    We have an issue whereby over-complex filters within queries generated by a report engine are resulting in wildly overestimated cardinality estimates.

    In turn, this leads to wildly overstated requests for workspace memory for sort operations. In fact, a single report execution would often request (and get) the full allocation of workspace memory; that is to say, the SQL Server's Max Workspace Memory divided by the 4 NUMA nodes.

    This is really upsetting our reporting SQL Server.

    There seems to be very little we can do for some reports to address the overestimates of cardinality and, in turn, the workspace memory requested and granted.

    Is there any way we can limit the amount of workspace memory in a SQL Server instance, or the portion thereof granted to an individual query execution?

    (It looks like Resource Governor might be one option, but for various reasons it would be difficult for us to implement.)

    Many thanks for any help you are able to provide.

    Kind regards,

    James

  • Jimmy M (10/26/2016)



    1) I don't know that you can limit available memory. But note that if you could, your queries might fail to execute because they couldn't get their memory grant. Note also that you would somehow have to disable spilling to tempdb as well.

    2) Have you put OPTION (RECOMPILE) on all of your report queries? I consider this almost mandatory on reporting boxes, and definitely mandatory for certain classes of queries, such as @StartDate/@EndDate range queries or the dreaded IS NULL OR scenario.
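
    Something along these lines, as a minimal sketch; the table and parameter names are illustrative only, not from your system:

        -- Hypothetical date-range report query; names are purely for illustration.
        SELECT  o.OrderID, o.OrderDate, o.TotalDue
        FROM    dbo.Orders AS o
        WHERE   o.OrderDate >= @StartDate
          AND   o.OrderDate <  @EndDate
        ORDER BY o.OrderDate
        OPTION (RECOMPILE);   -- compile a fresh plan for the actual parameter values on every run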

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

    It would probably be best to approach this issue by first confirming that statistics on tables and indexes are up to date, and then by simplifying the JOIN and WHERE clauses of the queries: confirm that expressions are sargable and are not forcing unnecessary scans and hash joins.

    However, for what it's worth, SQL Server 2012 (as of SP3) did introduce two query hints: MIN_GRANT_PERCENT and MAX_GRANT_PERCENT.

    https://support.microsoft.com/en-us/kb/3107401
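
    For example, something like the sketch below (table names are illustrative): the hint caps this execution's grant at 10% of the workspace memory otherwise available to it, while MIN_GRANT_PERCENT sets a floor.

        SELECT  c.CustomerID, SUM(s.Amount) AS TotalAmount
        FROM    dbo.Sales AS s
        JOIN    dbo.Customers AS c ON c.CustomerID = s.CustomerID
        WHERE   s.SaleDate >= @StartDate
          AND   s.SaleDate <  @EndDate
        GROUP BY c.CustomerID
        ORDER BY TotalAmount DESC
        OPTION (MAX_GRANT_PERCENT = 10);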

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Thanks for your responses.

    Given where we are right now, the criticality of the problem, and how little we can simplify the T-SQL in the offending reports in the immediate term, MAX_GRANT_PERCENT is a good option.

    I've done a quick proof of concept, with positive effect, and asked that the Report Development team see how easily it can be integrated. So thank you for that 🙂

    As to your helpful points to look out for:

    Re spilling: The sorts of numbers we are talking about here are 160 GB of workspace memory allocated to a single query that, according to sys.dm_exec_query_memory_grants, requires a few GB of workspace memory at the most. So we are confident we can find a healthy middle ground where we would prevent these massive over-allocations of memory without restricting a query execution's workspace memory so much as to risk spilling to tempdb.
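
    For reference, this is the kind of check we are running against the DMV (a minimal sketch, limited to the columns we actually read):

        -- Compare what each active query asked for, what it was granted,
        -- and what it has actually used so far.
        SELECT  session_id,
                requested_memory_kb / 1024 AS requested_mb,
                granted_memory_kb   / 1024 AS granted_mb,
                used_memory_kb      / 1024 AS used_mb,
                max_used_memory_kb  / 1024 AS max_used_mb,
                ideal_memory_kb     / 1024 AS ideal_mb
        FROM    sys.dm_exec_query_memory_grants
        ORDER BY granted_memory_kb DESC;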

    OPTION (RECOMPILE): I've previously suggested it to the Reporting Dev team in another context (other performance issues), so it's in place to some extent. It's a good reminder; maybe we should indeed push for this as a standard, as you suggest. Specifically in this context, it would certainly prevent a plan needlessly requesting an (overstated) 100 GB of workspace memory for a date range of, say, 1 year, when it could request a (still overstated but healthier) 10 GB for a date range of, say, 1 month. Also, if we can't guarantee that each execution gets its own plan, I cannot speak so confidently about the numbers I mentioned above. That is to say, sys.dm_exec_query_memory_grants is going to tell me how little workspace memory is actually required by the currently executing query, not by the one that got the plan compiled in the first place (which may have required closer to the requested amount).

    Statistics quality: Our stats are up to date and built with FULLSCAN, but "watered down" (as I like to put it) by extreme complexity in the filters. I believe that if I were to manually merge together all of the available statistics in the filters (mostly All Density, I'm afraid), I, too, would come up with these massive overestimations! That said, there must surely be some optimisations we can make in terms of SARGability, and we will soon renew efforts to tackle these one report at a time. Right now, we're looking to stop the bleeding.

    Thanks again. I will endeavour to post back any interesting finds from this exercise, including effective optimisations in the filters.

    J

  • Jimmy M (10/27/2016)



    1) The min/max grant memory hints require Resource Governor, which you said wasn't viable for some reason.

    2) OPTION (RECOMPILE) is definitely the report query tuner's friend. And if you use it you are GUARANTEED to get a fresh plan for every run of a query, because it tells the optimizer not to bother caching the plan.

    3) If you can do a proof of concept, you might want to stand up a SQL Server 2016 (or at least 2014) server of comparable hardware if possible. Then you can see whether the new cardinality estimator results in better (or worse) plans; see the sketch after this list. SQL Server 2016 would also get you the benefit of the TREMENDOUS "it just runs faster" performance improvements (look that up for numerous blog posts from PSS on it) they FINALLY made to many aspects of the engine.

    4) Have you considered/tested column store indexes yet?

    5) I don't think you mentioned your server RAM number yet. How much do you have? Is it maxed out on the hardware? If not, I have an idea for you. 😀 If it is, I have another idea for you. 😎

    6) Sounds like you have some really interesting stuff going on!!
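
    For point 3, something along these lines would do: a sketch assuming a restored test copy of the reporting database on the 2014/2016 box (the database name is illustrative). On 2014 and later, the compatibility level selects which cardinality estimator is used.

        ALTER DATABASE ReportDB_Test SET COMPATIBILITY_LEVEL = 110;  -- legacy (2012-era) cardinality estimator
        GO
        -- run the problem report query here and capture its plan, then switch:
        ALTER DATABASE ReportDB_Test SET COMPATIBILITY_LEVEL = 130;  -- SQL Server 2016 / new cardinality estimator
        GO
        -- re-run and compare estimated row counts and memory grants between the two plans.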

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • One more thing for grins and giggles: have you tried Trace Flag 4199?
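
    A hedged sketch of two ways to test it; the per-statement form needs sysadmin, and the table names below are illustrative only:

        -- Per statement, via QUERYTRACEON:
        SELECT  r.ReportID, COUNT(*) AS RowsReturned
        FROM    dbo.ReportFacts AS r
        WHERE   r.ReportDate >= '20160101'
        GROUP BY r.ReportID
        OPTION (QUERYTRACEON 4199);

        -- Or instance-wide, for a non-production test environment only:
        DBCC TRACEON (4199, -1);
        DBCC TRACESTATUS (4199);
        DBCC TRACEOFF (4199, -1);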

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

    1) Just to be clear, do you mean the hint MAX_GRANT_PERCENT requires Resource Governor? Only, it seems to be working fine without us enabling Resource Governor, inasmuch as it's limiting the requested memory as instructed.

    2) Yep, agreed, we'll definitely revisit this.

    3) That's a good idea! Our 2016 lab is significantly smaller right now; but then, I only really need to see to what extent the filter complexity throws out the cardinality estimates under compatibility level 110 versus 120/130. That would be enough to guide us. Better still perhaps, and to your point about trace flag 4199, that trace flag on 2012 is something we can test in a full-size environment. We'll look into that.

    4) Definitely considering columnstore indexes, but we're waiting until we upgrade from 2012 to 2016 to get some of the latest columnstore improvements. (Finally I make mention of the fact that we're on 2012 SP3!)

    5) So, we have 1 TB of RAM in the machine (a physical server). 840 GB is allocated to SQL Server, of which 640 GB shows as Maximum Workspace Memory.** Is memory maxed out? Well, in effect, yes, taking into account our comfort levels 🙂

    But I'd be grateful to hear your ideas both for the yes and no answers to "Is it maxed out on the hardware?" !!

    ** We have 4 NUMA nodes, so the maximum workspace each troublesome query is being granted is one quarter of that, c. 160 GB. That is, until we specify the hint MAX_GRANT_PERCENT = 10, which results in a grant of 16 GB.
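
    For reference, the arithmetic above can be cross-checked against the Memory Manager counters; a sketch (the instance prefix in object_name may differ from plain SQLServer:):

        SELECT  counter_name,
                cntr_value / 1024 / 1024 AS value_gb
        FROM    sys.dm_os_performance_counters
        WHERE   object_name LIKE '%Memory Manager%'
          AND   counter_name IN ('Maximum Workspace Memory (KB)', 'Granted Workspace Memory (KB)');
        -- e.g. 640 GB maximum workspace / 4 NUMA nodes = c. 160 GB available to one query;
        -- MAX_GRANT_PERCENT = 10 then caps the grant at c. 16 GB.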

    Which leads me to bonus Q1: On the subject of confirming that the workspace memory requested (and granted) is influenced by MAX_GRANT_PERCENT, one thing I have noticed is that we're getting spills to tempdb when I specify MAX_GRANT_PERCENT = 10, 20 or even 50. The spill is prevented somewhere between 51 and 75%. What I find so strange is that sys.dm_exec_query_memory_grants shows max_used_memory_kb as less than 5 GB.

    Are there intricacies, or even inadequacies, of sys.dm_exec_query_memory_grants I am overlooking?

    Needless to say, this unanticipated spilling may, after all, threaten the very feasibility of our plan to limit workspace memory per query ... along the lines of your original reply 🙂 .

    And, if I may, bonus Q2: How can I tell how much data is being spilled? I can see in the actual plan that it's spill level 1, and I can see that tempdb allocations, per sp_whoisactive, don't go over 500,000 pages (please note: it's not far off this when there's no spill). But is that a reliable way to understand the scale of the spill? In the plan, I don't see any of the detail about how many threads spilled or how many pages were spilled. That's weird not to see that, right? The spill in this case is from a Hash Match, not the Sort that I originally suspected was the big workspace memory consumer.
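
    For reference, one way we might at least catch these spills as they occur on 2012 is an Extended Events session on the hash/sort warning events; a sketch, with the session and file names purely illustrative:

        CREATE EVENT SESSION [TrackSpills] ON SERVER
        ADD EVENT sqlserver.hash_warning
            (ACTION (sqlserver.session_id, sqlserver.sql_text)),
        ADD EVENT sqlserver.sort_warning
            (ACTION (sqlserver.session_id, sqlserver.sql_text))
        ADD TARGET package0.event_file (SET filename = N'TrackSpills.xel');
        GO
        ALTER EVENT SESSION [TrackSpills] ON SERVER STATE = START;
        -- Note: these events flag that a spill happened; sizing it still means watching
        -- tempdb allocations (e.g. sys.dm_db_task_space_usage) while the report runs.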

    And bonus Q3: By the way, any ideas as to why Microsoft wouldn't expose a "Max memory per query (KB)" configuration setting, bearing in mind they have a "Min memory per query (KB)"? It's academic I know, but curiosity has forced me to ask this question!

    Thanks again.

    That's stretching the inquiries appropriate for a free support forum there, Jimmy! 🙂

    I am traveling today and will be pretty busy tomorrow with family stuff. But I will definitely try to get you a full reply within a few days.

    Some quickies for now:

    A) Can you put more than 1TB in the server?

    B) Why cap max server memory at 840 GB? That seems a rather steep set-aside to me at first blush. You have some other big-memory consumers running on the box, I presume?

    C) IIRC there is a TF that can stop the NUMA memory localization. That could have a side effect (uncertain - never tested it and don't have source code access) of stopping the divide-by-four behaviour that is limiting your workspace memory.

    D) Might I suggest looking at bringing in a high-level sniper to give the system and code a performance review?

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Thanks again for these points.

    Regarding the current SQL Server max server memory setting, there's a lot of history behind that value, but it's something we can revisit.

    Anyway, regarding my bonus question 1 .... I have been doing some more testing of MAX_GRANT_PERCENT. It's really interesting to see how this affects not only the amount requested and granted, but also the amount used.

    In one example, the original report (not specifying MAX_GRANT_PERCENT) requests 83 GB, gets a grant of 83 GB and uses (at its peak) 3 GB.

    Hence my keenness to limit the grant to something comfortably over 3 GB. I was hoping that MAX_GRANT_PERCENT = 10 would mean the report's SQL would request 8 GB, get a grant of 8 GB and use 3 GB.

    However, instead, it requests 8 GB, gets a grant of 8 GB and uses (at its peak) 1 GB! The rest (roughly) gets spilled to tempdb.

    This has something to do with the memory fractions of each operator. We have some hash join operators that have access to just 1% of the overall memory grant. 1% of 83 GB may be enough for the operation. 1% of 8.3 GB is not. And this detail is concealed when looking merely at the max amount of memory used by a query.

    So, setting MAX_GRANT_PERCENT such that the resulting grant exceeds max_used_memory_kb in sys.dm_exec_query_memory_grants will not guarantee enough memory for each and every operator, and therefore it won't prevent spills to tempdb. That said, max_used_memory_kb does give you an idea of the total memory requirement of the original query, and of the extent of spilling you might see, in the worst case, if you were to use MAX_GRANT_PERCENT.
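
    For reference, a sketch of one way to see the per-operator memory fractions mentioned above, by shredding the cached plan of each active grant (this assumes the plan XML exposes the MemoryFractions element for the memory-consuming operators):

        WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
        SELECT  g.session_id,
                op.value('@NodeId', 'int')                        AS node_id,
                op.value('@PhysicalOp', 'nvarchar(60)')           AS physical_op,
                op.value('(MemoryFractions/@Input)[1]', 'float')  AS memory_fraction_input,
                op.value('(MemoryFractions/@Output)[1]', 'float') AS memory_fraction_output
        FROM    sys.dm_exec_query_memory_grants AS g
        CROSS APPLY sys.dm_exec_query_plan(g.plan_handle) AS qp
        CROSS APPLY qp.query_plan.nodes('//RelOp[MemoryFractions]') AS n(op)
        ORDER BY g.session_id, node_id;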

  • We have an issue whereby over-complex filters within queries generated by a report engine are resulting in wildly overestimated cardinality estimates.

    I see this often, and capping the memory grants is not a solution; it is a futile suppression exercise. I strongly suggest spending the time and effort on three things: keep indexes optimal, keep statistics up to date, and improve the report engine. Many of these so-called report engines are horrendous in the way they generate code, one of the first time bombs to go off when cardinality increases.

    😎

  • I didn't reread the thread in detail, but have you checked estimated and actual rows at various points in the query plan to see if something is causing them to be way out of whack and leading to requesting a large grant but not using it?
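
    A quick way to do that check, as a sketch (table names illustrative): capture the actual plan for the report query and compare each operator's EstimateRows against its ActualRows in the returned showplan XML.

        SET STATISTICS XML ON;
        SELECT  r.ReportID, COUNT(*) AS RowCnt
        FROM    dbo.ReportFacts AS r
        GROUP BY r.ReportID;
        SET STATISTICS XML OFF;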

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Oh yes.

    In the few reports I have looked at, deep down in the plan there is typically a hash join that is estimating that millions of rows will come out of it, whereas in fact it is thousands.

    I have tended to blame this on complex filtering within the operators that precede the hash join, without taking a closer look.

    However, your question has prompted me to look deeper, taking one report as an example. In this one case (at least) there is one hash join that estimates 7 million rows will come out of the join, whereas in fact it's c. 200,000.

    And the inputs are reasonably accurate; it's the estimated output that is suspect.

    The hash is joining an intermediate resultset (derived from previous joins, itself with a reasonably accurate cardinality estimate) to a nonclustered index scan.

    It is taking c. 373,000 rows into the top (build) input of the hash join and joining them to the result of the nonclustered index scan of table "B", which itself returns c. 9,672,000 rows. The join is being performed on column [ID], which has approximately 3.17 rows in table B for every unique ID. That's the correct average distribution, and that's what the statistics object states (All Density on column ID).

    I thought that joins used All Density and so I expected the estimate to be somewhere in the order of 1,182,000.

    I don't know where this implied estimate of c. 20 rows per ID in table B comes from.

    This overestimate essentially seeds an overall over-estimation that prevails throughout the plan and is further exacerbated to some extent by other operators.
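
    For reference, a sketch of the density check behind the numbers above; the statistics and object names are illustrative. All Density is 1 / (number of distinct IDs), so average rows per ID is All Density multiplied by the table's row count.

        DBCC SHOW_STATISTICS ('dbo.TableB', 'IX_TableB_ID') WITH DENSITY_VECTOR;
        -- e.g. c. 3.17 rows per ID  =>  373,000 build rows * 3.17 ≈ 1,182,000 estimated
        -- join output rows, versus the c. 7,000,000 the optimizer actually estimated
        -- (which implies ≈ 20 rows per ID).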

    By the way, there is no foreign key relationship between table B's [ID] column and the table being joined to from the top input.

  • Jimmy M (11/15/2016)



    There are a number of reasons why estimates may not be what you expect. Predicate filtering is an obvious one. Another could be that values extend past the range covered by the existing statistics. I don't know that all joins use All Density as the main input to the estimation math.

    I have often found big performance wins by breaking down monster report queries into one to three intermediate temporary tables (NOT table variables) to give the optimizer a hand at getting the best plan for the later operations. I don't know whether you can do this with your reporting system, however.
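
    A minimal sketch of that divide-and-conquer approach, with illustrative table names: materialize the heavily filtered portion first, so the optimizer gets real statistics (and accurate cardinality) for the rest of the query.

        SELECT  f.OrderID, f.CustomerID, f.Amount
        INTO    #FilteredOrders
        FROM    dbo.Orders AS f
        WHERE   f.OrderDate >= @StartDate
          AND   f.OrderDate <  @EndDate
          AND   f.Status = 'Complete';

        CREATE CLUSTERED INDEX IX_FilteredOrders ON #FilteredOrders (CustomerID);

        SELECT  c.CustomerName, SUM(o.Amount) AS TotalAmount
        FROM    #FilteredOrders AS o
        JOIN    dbo.Customers   AS c ON c.CustomerID = o.CustomerID
        GROUP BY c.CustomerName
        OPTION (RECOMPILE);   -- recompile so the temp table's own statistics inform the plan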

    Sadly, I think we are past the point where a free forum can assist you with this issue, and probably have been since the very start. If you haven't yet, you may want to consider a "sniper" consultant - someone who would be interested in parachuting in just to spend a few hours helping out with this issue. It sounds like an interesting environment to work with, so you should be able to find some takers.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
