Memory Management

  • I have a SQL Server 2012 server with SP1 - 64GB RAM. The server is solely dedicated to SQL Server, so it's grabbed just under 63GB of that.

    According to the count of records in sys.dm_os_buffer_descriptors there are 2,598,791 pages in the buffer pool - which equates to roughly 20GB by my calculations.
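    (For anyone wanting to reproduce the calculation, a query along these lines should give the same figures - buffer pool pages are 8KB each:)

    ```sql
    -- Count buffer pool pages and convert to GB (each page is 8KB)
    SELECT COUNT(*)                       AS buffer_pages,
           COUNT(*) * 8 / 1024.0 / 1024.0 AS buffer_gb
    FROM sys.dm_os_buffer_descriptors;
    ```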

    The Memory Manager\Free Memory performance counter tells me there is over 33GB of unused memory, yet at regular intervals throughout the day the page life expectancy plummets to under 100 seconds, where it sits for a minute and then gradually starts climbing again. Sometimes it recovers to what was normally its resting point of around 3,500 seconds; other times it only climbs for a few minutes before plummeting again.

    Looking at sys.dm_exec_query_stats I can't see any queries having hit the server that have done massive physical reads that would force a load of pages out of the buffer pool - yet something is happening to cause this.
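    (The check I ran was roughly along these lines - a sketch that orders the cached query stats by physical reads:)

    ```sql
    -- Top statements by physical reads since their plans were cached
    SELECT TOP (20)
           qs.total_physical_reads,
           qs.execution_count,
           st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_physical_reads DESC;
    ```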

    I'm wondering if I may be hitting some aspect of this bug as I'm also seeing some peaks of the lazy writes/sec counter so I've scheduled to install CU4 and see if that sorts things out.

    I'm no expert in SQL Server memory management, so I'm looking for some guidance here as to why SQL Server would be leaving over half of the available memory free. Are there some pertinent performance counters/DMVs that I'm not looking at which might shed more light on the situation?

  • Do you have an excessive number of linked servers, CLR assemblies, etc.? Multi-page allocations, BLOBs? Are you locking pages in memory?

    Try running select * from [sys].[dm_os_process_memory] - posting the results would be useful. Posting the output of DBCC MEMORYSTATUS might also help, although that's a lot more data.

  • 1) SP1 has a CRITICAL BUG that has a MANDATORY patch for it. You MUST at least apply that.

    2) SP1 has MULTIPLE (IIRC) memory leaks that will suck away your RAM over time. I STRONGLY recommend testing and patching to the latest CU!!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • The condition you describe fits the SQLOS memory manager bug that is fixed in CU4 (http://support.microsoft.com/kb/2845380), but if you are going to take an outage to apply CU4, I would just go ahead and apply CU6 and bring your instance up to the latest update. To troubleshoot further, you would have to track your resource monitor ring buffer entries for the internal low-memory notification being set that is trimming memory internally. In most cases, though, it would be easier to test the CU on a test server under your normal workload and see whether the behavior changes to what is expected.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Thanks guys - I had recommended that they apply the latest CU, and I believe they applied CU6 last night. I'm next on-site at that client tomorrow, so I'll check out their servers then and see whether things look any better and the performance problems have eased off.

  • You have also over-allocated your server memory. On a 64GB server, you should not try to get SQL Server to use 63GB.

    There are many processes that need memory: start with the operating system, then add anti-virus and other services, SQL Server processes that run outside the main SQL memory space, backups, etc.

    A rough rule of thumb is to allow 20% of your server memory for processes outside the main SQL memory space, so on a 64GB server you should start by setting maximum SQL Server memory to 51GB. Once you have some memory usage stats you can fine-tune this setting, which may then move either up or down.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • GREAT catch Ed! I focused on just the memory leak situation and zoomed right over the obvious lack of MAX MEMORY cap! Personally I would start with something a bit higher than 51GB for a dedicated SQL Server with 64GB of RAM. I do 57GB as a starting point on SQL 2012 (where max is actually a MAX - as opposed to older versions) and monitor for excessive paging.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (11/14/2013)


    GREAT catch Ed! I focused on just the memory leak situation and zoomed right over the obvious lack of MAX MEMORY cap! Personally I would start with something a bit higher than 51GB for a dedicated SQL Server with 64GB of RAM. I do 57GB as a starting point on SQL 2012 (where max is actually a MAX - as opposed to older versions) and monitor for excessive paging.

    Max Server Memory is not actually the max at all in SQL Server 2012. It is closer than in previous versions, due to the memory manager change to a unified "any size" page allocator for most internal components, but it doesn't account for thread call-stack memory in SQLOS, or for any allocations made from the VAS allocator directly through calls to VirtualAlloc - examples being SQLCLR, OLE Automation calls, extended procedures, and even in-process linked server providers.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Thanks for all the comments guys - since they applied CU6 a couple of days ago I'm seeing PLE consistently at 1.5 hours. The buffer pool is no longer using only 20GB - it's taking 51GB, with the SQL Server free memory counter down to 256MB. I'm much happier with that than with what I was seeing before, and it obviously answers my question about why SQL Server was leaving such a large amount of memory free.

    To address the other points raised - max server memory is set at 60GB; 4GB should be enough, IMO, for the OS and other services. There is literally nothing else running on the server: any non-essential Windows services are disabled, no additional software is installed, there is no virus scanner, and things like OLE Automation, extended procs and linked servers are not used at all. CLR usage is minimal and restricted to admin jobs.

    In any event, I take on board your suggestion of reducing the max setting. I'll monitor whether or not the server is being starved of memory for other processes, and will consider reducing it if that is the case.

    Thanks again for the comments.

  • SQLDBAPerth (11/14/2013)


    To address the other points raised - max server memory is set at 60GB; 4GB should be enough, IMO, for the OS and other services. There is literally nothing else running on the server: any non-essential Windows services are disabled, no additional software is installed, there is no virus scanner, and things like OLE Automation, extended procs and linked servers are not used at all. CLR usage is minimal and restricted to admin jobs.

    What you aren't accounting for is that it takes memory in the kernel to map the memory being used. The kernel memory overhead is roughly 7% on top of the other memory allocations for the OS and other services, so at 60GB you are certainly going to be bumping up against memory pressure. My recommended calculation for a starting point for 'max server memory' is to reserve 1GB of RAM for the OS, 1GB for each 4GB of RAM installed from 4-16GB, and then 1GB for every 8GB of RAM installed above 16GB. For a 64GB server, this results in 10GB of reserved memory and 'max server memory' being configured at 54GB. From there I always say monitor Memory\Available MBytes over time and slowly increase 'max server memory' up from this base calculation, which is conservative and will leave memory available.
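    (As a sketch, that starting-point calculation works out as follows - the variable names are just for illustration:)

    ```sql
    -- Sketch of the reservation calculation above, for a 64GB server
    DECLARE @TotalGB int = 64;
    DECLARE @ReservedGB int =
          1                                                   -- the OS itself
        + (CASE WHEN @TotalGB >= 16 THEN 3
                ELSE (@TotalGB - 4) / 4 END)                  -- 1GB per 4GB from 4-16GB
        + (CASE WHEN @TotalGB > 16 THEN (@TotalGB - 16) / 8
                ELSE 0 END);                                  -- 1GB per 8GB above 16GB

    SELECT @TotalGB - @ReservedGB AS SuggestedMaxServerMemoryGB;  -- 54 for a 64GB server
    ```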

    The problem with setting 'max server memory' too high is that SQLOS will respond to Windows memory pressure and shrink its memory usage to return memory to Windows when needed. If you look at the ring buffer entries you will see this as low memory notifications with IsSystem = 2, and the process ends up in a constant grow/shrink/grow/shrink cycle that isn't ideal for caching or performance, since each low notification triggers a system-wide external clock hand sweep across all caches, forcing entry cleanup to release memory. It is much easier to see that you have extra available memory and increase 'max server memory' than it is to tell that you set it too high and the process is growing and shrinking regularly.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Thanks Jonathan - I think on average I learn something new for every 5 words you write when you respond to something.

    I don't know if that's purely you being a good teacher or partly because I'm a dumbass.

    I assume that when you say to look at the ring buffer entries, you mean checking sys.dm_os_ring_buffers for the RING_BUFFER_RESOURCE_MONITOR type. If that's correct and I'm reading the data correctly, the RESOURCE_MEMPHYSICAL_LOW events all have an IndicatorsSystem value of 0 and an IndicatorsProcess value of 2. Based on what you are saying, does this suggest that this server is not experiencing any issues with a grow/shrink cycle, and that those events are limited to individual processes needing more memory? The server has only been up for 2 days since installing the CU6 update, so it might not be long enough for memory to become a problem.
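    (For reference, this is roughly how I pulled the indicator values out of the ring buffer - the XML paths reflect my understanding of the record layout:)

    ```sql
    -- Resource monitor notifications with the process/system indicator flags
    SELECT rb.x.value('(//Notification)[1]',      'varchar(30)') AS notification,
           rb.x.value('(//IndicatorsProcess)[1]', 'int')         AS indicators_process,
           rb.x.value('(//IndicatorsSystem)[1]',  'int')         AS indicators_system
    FROM (SELECT CAST(record AS xml) AS x
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR') AS rb;
    ```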

    Cheers,

    Paul.

  • It depends, and without a lot of additional information I can't say for sure. You previously said the buffer pool was only using 51GB after the patch, so you might not have ramped up completely, or the workload just doesn't require all of the available memory - in which case this is all a wash. Your internal pressure (IsProcess = 2 on the low notifications) is likely due to an ad hoc workload or plan caching issues. It could be something else, but this is most commonly the area that triggers internal memory pressure, and you might check your plan cache usage with the queries from Kimberly's blog post:

    http://www.sqlskills.com/blogs/kimberly/plan-cache-and-optimizing-for-adhoc-workloads/

    If you find heavy single-use plans with lots of wasted cache space, 'optimize for adhoc workloads' should be enabled to reduce the effects on the server. This is the most common cause of IsProcess = 2 that I see, but others could still exist that aren't related to plan caching.
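    (The shape of the query from that post, grouping cached plans by type and highlighting the single-use cost, is roughly as follows - written from memory, so check the original for the exact version:)

    ```sql
    -- Plan cache breakdown by type, highlighting single-use plans
    SELECT objtype AS CacheType,
           COUNT_BIG(*) AS [Total Plans],
           SUM(CAST(size_in_bytes AS decimal(18, 2))) / 1024 / 1024 AS [Total MBs],
           AVG(usecounts) AS [Avg Use Count],
           SUM(CAST(CASE WHEN usecounts = 1 THEN size_in_bytes ELSE 0 END
                    AS decimal(18, 2))) / 1024 / 1024 AS [Total MBs - USE Count 1],
           SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS [Total Plans - USE Count 1]
    FROM sys.dm_exec_cached_plans
    GROUP BY objtype
    ORDER BY [Total MBs - USE Count 1] DESC;
    ```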

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Optimise for ad-hoc is already turned on on this server - but there are still a lot of single-use plans in the cache. Presumably these have all been run twice, so the second time the plan is persisted.

    CacheType  Total Plans  Total MBs    Avg Use Count  Total MBs - USE Count 1  Total Plans - USE Count 1
    Proc       13721        3105.820312  31             842.914062               3282
    Prepared   14232        1502.687500  13             569.960937               4502
    Trigger    559          288.375000   12             91.898437                150
    Adhoc      2972         100.975097   5              46.225097                2035
    UsrTab     384          15.390625    56             0.523437                 12
    Check      14           0.484375     2              0.187500                 5
    View       1957         206.054687   3              0.000000                 0

    I'll keep an eye on the memory, and if it looks like there are issues I'll tune back the max memory setting, but for the time being I think it looks like it might be OK.

    Thanks heaps for your advice Jonathan - very much appreciated.

  • The 'optimize for adhoc workloads' sp_configure option only affects the adhoc cache, so all of the other caches will still keep the full plan for single-use executions - which is where more of the memory is going, not the adhoc cache itself. There is no option for minimizing the caching of single-use procs or prepared plans, so this may just be the nature of things for your workload.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs
