Resource utilization difference

  • Hi Experts,

    The IT team gave us a memory utilization report for each server. In the report, the memory utilization of our SQL Server box shows as very low at 20%, but when I check in SQL Server it uses 61 GB out of the total 64 GB all the time. Can experts please let me know why there is a difference?

    We are in the process of creating a capacity plan document. For the SQL Servers, do we need to ask for more memory?

  • What is the exact name of the performance counter they're reporting on?

    61 out of 64GB seems rather high - you need to leave some for the operating system, plus more if the database engine isn't the only application or SQL Server component that is installed on the server.  There are a few resources that you could search for that will recommend how to calculate the max server memory for your server.  Don't just believe the first one - look at a few of them and go with the one you feel most comfortable with.
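
    As an illustration only (the 58368 MB / 57 GB figure below is just an example value, not a recommendation for your workload), checking and changing the setting looks something like this:

    -- View the current max server memory setting (value is reported in MB)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';

    -- Example value only: cap the instance at 58368 MB (57 GB) once you've done your own calculation
    EXEC sp_configure 'max server memory (MB)', 58368;
    RECONFIGURE;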

    John

  • I would ask how they calculate the percentage in the report as you see such a huge difference.

    That being said, do you have the "Lock Pages In Memory" security option set on your servers?  If so, they may be getting the value the same way Task Manager gets it: with that setting enabled, the process shows a significantly different value in Task Manager compared to what it is actually using.
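
    If you want to check from inside SQL Server rather than guessing from Task Manager, something like the following shows whether locked pages are actually in use:

    -- A non-zero locked_page_allocations_kb means the buffer pool is using locked pages
    SELECT locked_page_allocations_kb,
           physical_memory_in_use_kb,
           memory_utilization_percentage
    FROM sys.dm_os_process_memory;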

    Do you need more memory?  Hard to say - you will need to look at performance metrics over time to see if the RAM you have is enough for your workload, typically things like constant cache flushes, page life expectancy, etc.

    As a side note, what is your max memory set to on the instance?  Just curious, as I generally leave the OS either 10% of RAM rounded up to the next GB or 4 GB, whichever is larger, so for 64 GB I would set SQL Server to use 57 GB.  I had issues in the past with setting it at 60 GB: the buffer and procedure cache were constantly emptying even though SQL Server was utilizing the RAM.
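
    Just to sketch that rule as arithmetic (an illustration only, reading total RAM from sys.dm_os_sys_memory - for a 64 GB box it works out to the 57 GB mentioned above):

    -- Reserve 10% of RAM rounded up to the next GB, or 4 GB, whichever is larger
    SELECT total_physical_memory_kb / 1024 AS total_mb,
           (total_physical_memory_kb / 1024)
             - CASE WHEN CEILING(total_physical_memory_kb / 1024.0 * 0.10 / 1024) * 1024 > 4096
                    THEN CEILING(total_physical_memory_kb / 1024.0 * 0.10 / 1024) * 1024
                    ELSE 4096
               END AS suggested_max_server_memory_mb
    FROM sys.dm_os_sys_memory;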

  • John Mitchell-245523 - Wednesday, December 6, 2017 5:07 AM


    Thanks John. SQL Server max memory is set to 57 GB out of 64 GB.

  • anthony.green - Wednesday, December 6, 2017 5:07 AM


    Yes Anthony, Lock Pages in Memory is enabled. We have set 57 GB as the max memory for SQL Server. Today's values are as below.

    Counter_Name             Memory_MB
    Target Server Memory     57984
    Total Server Memory      57091

    SQL Server will gradually use all the memory available to it, but that doesn't mean there is memory pressure. Buffer cache hit ratio is 100%, PLE is 50%, Available MBytes is 30%.
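
    For reference, the two figures above come straight from the Memory Manager counters, which you can pull with something like:

    -- Target vs Total Server Memory, converted from KB to MB
    SELECT RTRIM(counter_name) AS counter_name,
           cntr_value / 1024   AS memory_mb
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');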

  • VastSQL - Wednesday, December 6, 2017 10:50 PM


    PLE isn't a percentage, it's a value in seconds: the lower the value, the more page flushing is happening; the higher, the less.  With the buffer cache hit ratio at 100% it seems you don't have a memory problem, but from a single point-in-time snapshot you never know.  You need to profile, baseline, and check for fluctuations from the norm.
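
    If you want to see the raw figure, PLE comes from the Buffer Manager counters (in seconds); on NUMA hardware you also get one value per Buffer Node, e.g.:

    -- Page life expectancy in seconds (instance level, plus one row per NUMA node on NUMA hardware)
    SELECT [object_name], cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy';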

  • anthony.green - Thursday, December 7, 2017 1:24 AM


    Thanks Anthony.

    PLE average value is showing as 59,142,961

  • VastSQL - Thursday, December 7, 2017 3:15 AM


    Again, this goes back to my point: that's still just one value at a point in time, so you need to profile.  That value is quite high, which is an indication you're OK, but again: profile, baseline, and check for any anomalies.  You never know what a daily, weekly, monthly, quarterly, or yearly process can do to your counters.
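
    A minimal sketch of the kind of baseline capture I mean (the table name is just a placeholder; you'd run the INSERT from a scheduled Agent job and trend the results):

    -- Placeholder baseline table: snapshot the memory counters on a schedule and trend them
    CREATE TABLE dbo.MemoryCounterBaseline
    (
        capture_time  datetime2     NOT NULL DEFAULT SYSDATETIME(),
        counter_name  nvarchar(128) NOT NULL,
        cntr_value    bigint        NOT NULL
    );

    INSERT INTO dbo.MemoryCounterBaseline (counter_name, cntr_value)
    SELECT RTRIM(counter_name), cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Page life expectancy',
                           'Target Server Memory (KB)',
                           'Total Server Memory (KB)');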

  • anthony.green - Thursday, December 7, 2017 3:18 AM


    Thanks Anthony. My mistake - the value is 59,142.961, which comes to around 16 hours. Thanks a lot for the advice.
