PLE

  • Hi All,

    I'd appreciate it if anybody can help me understand why physical memory is divided by 4 to determine the ideal Page Life Expectancy. Thank you.

  • Arsh - Saturday, August 26, 2017 8:35 AM

    Hi All, I'd appreciate it if anybody can help me understand why physical memory is divided by 4 to determine the ideal Page Life Expectancy. Thank you.

    There's no good reason, and the answer is probably wrong depending on your system.  At best, it's a SWAG (a "scientific" wild guess).  Please see the following article.
    https://www.sqlskills.com/blogs/paul/page-life-expectancy-isnt-what-you-think/

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Arsh - Saturday, August 26, 2017 8:35 AM

    I'd appreciate it if anybody can help me understand why physical memory is divided by 4 to determine the ideal Page Life Expectancy.

    It's not.

    The ideal value for PLE is as high as possible. The lower it is, the more data is getting read off disk. If the SQL Server has 20GB allocated to it, and the PLE is 600, that means that, on average, 20GB of data is read from disk every 10 minutes (PLE is measured in seconds). Whether that's too much data being read depends on the capability of your IO subsystem.
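The current PLE can be read straight from the performance-counter DMV. A minimal sketch (on a NUMA machine, the 'Buffer Node' rows report a separate PLE per node alongside the instance-wide 'Buffer Manager' value):

```sql
-- Current Page Life Expectancy, in seconds.
-- 'Buffer Manager' is the instance-wide value; 'Buffer Node' rows are per NUMA node.
SELECT [object_name],
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';
```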

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Saturday, August 26, 2017 6:00 PM

    Arsh - Saturday, August 26, 2017 8:35 AM

    I'd appreciate it if anybody can help me understand why physical memory is divided by 4 to determine the ideal Page Life Expectancy.

    It's not.

    The ideal value for PLE is as high as possible. The lower it is, the more data is getting read off disk. If the SQL Server has 20GB allocated to it, and the PLE is 600, that means that, on average, 20GB of data is read from disk every 10 minutes (PLE is measured in seconds). Whether that's too much data being read depends on the capability of your IO subsystem.

    Thanks, Jeff and Gail. Gail, in your example above, is the 20 GB 'allocated' or 'used'? What are the things one can do to increase the PLE?

    Thank you. Arshad

  • Arsh - Sunday, August 27, 2017 5:10 AM

    Gail, in your example above, is the 20 GB 'allocated' or 'used'?

    Max server memory setting.
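That setting can be checked with sp_configure. A quick sketch ('show advanced options' has to be enabled before the setting is visible):

```sql
-- Show the current 'max server memory' setting (compare value vs. value_in_use)
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)';
```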

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Sunday, August 27, 2017 6:40 AM

    Arsh - Sunday, August 27, 2017 5:10 AM

    Gail, in your example above, is the 20 GB 'allocated' or 'used'?

    Max server memory setting.

    Thank you so much. I'll start a separate thread to get help on increasing the PLE.

  • Increase PLE: Add memory or reduce need for memory by tuning queries or reducing data. It's not complicated.
    PLE is the measure of how long a page will stay in the buffer pool after being read in. So to increase it, either add memory so that pages stay longer, or reduce the need for memory so that pages stay longer.
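Before choosing between adding memory and tuning, it can help to see what is actually occupying the buffer pool. A sketch using sys.dm_os_buffer_descriptors (note this walks every buffer descriptor, so it can take a while on large-memory servers):

```sql
-- Buffer pool usage per database, in MB (data pages are 8 KB each)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;
```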

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Sunday, August 27, 2017 8:59 AM

    Increase PLE: Add memory or reduce need for memory by tuning queries or reducing data. It's not complicated.
    PLE is the measure of how long a page will stay in the buffer pool after being read in. So to increase it, either add memory so that pages stay longer, or reduce the need for memory so that pages stay longer.

    Thank you. One of the databases is facing performance issues: it's 600 GB in size, with 54 GB max server memory out of the 64 GB on the box, using 18 processor cores (no NUMA). Though the code is being looked at, along with creating missing indexes, do you think index fragmentation is also an issue here? Is the fragmented portion of the index also read into memory? Thank you.

  • Arsh - Sunday, August 27, 2017 12:10 PM

    GilaMonster - Sunday, August 27, 2017 8:59 AM

    Increase PLE: Add memory or reduce need for memory by tuning queries or reducing data. It's not complicated.
    PLE is the measure of how long a page will stay in the buffer pool after being read in. So to increase it, either add memory so that pages stay longer, or reduce the need for memory so that pages stay longer.

    Thank you. One of the databases is facing performance issues: it's 600 GB in size, with 54 GB max server memory out of the 64 GB on the box, using 18 processor cores (no NUMA). Though the code is being looked at, along with creating missing indexes, do you think index fragmentation is also an issue here? Is the fragmented portion of the index also read into memory? Thank you.

    On the missing index thing, possibly.  You have to look.
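One place to look is the missing-index DMVs. A sketch only; these suggestions are hints recorded by the optimizer since the last restart, not finished index designs, and need sanity-checking before you create anything:

```sql
-- Most "impactful" missing-index suggestions recorded since the last restart
SELECT TOP (20)
       mid.statement AS table_name,
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.user_seeks, migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```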

    On the index fragmentation thing, probably not.  I've not rebuilt indexes on my production box since 17 Jan 2016 (more than a year and a half ago) and, thanks to what I call a "Natural Fill Factor", performance markedly improved for the first 3 months and hasn't degraded since.

    Except for an ever-increasing clustered index, defragmenting indexes can actually cause more memory to be used.  Non-clustered indexes are almost never in the same order as the inserts, so they become instantly and significantly fragmented when you add data to the table.  That fragmentation is due to page splits, so adding what seem like trivial amounts of data can cause your NCIs to occupy twice as much memory as they would over time if you just stopped defragmenting indexes.  If you won't do that, then any index you build that isn't in the same order as your inserts will need an 80%-90% Fill Factor which, in itself, is a waste of space for those areas of the index that aren't a normal part of updates or inserts.
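If you do want to measure fragmentation before deciding, sys.dm_db_index_physical_stats reports it. A sketch (LIMITED mode is the cheap scan; small indexes of under roughly 1,000 pages are usually not worth worrying about, hence the filter):

```sql
-- Logical fragmentation per index in the current database, cheapest scan mode
SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
       i.name                       AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.[object_id] = ips.[object_id] AND i.index_id = ips.index_id
WHERE ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```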

    What's really common is "one off" code: a single-use execution plan created because of non-SARGable queries, or queries that need a new execution plan generated because the ORM code changed.  We recently had a nasty bit of code that took "only" 100 milliseconds to execute, but it recompiled every time it was called, and the recompile took 2 to 22 seconds.
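Plans that keep getting regenerated show up in the plan cache statistics. A sketch for spotting them (plan_generation_num counts how many times a cached statement's plan has been recompiled):

```sql
-- Statements whose cached plans have been regenerated the most times
SELECT TOP (20)
       qs.plan_generation_num,
       qs.execution_count,
       SUBSTRING(st.[text], qs.statement_start_offset / 2 + 1, 100) AS statement_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.plan_generation_num DESC;
```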

    Also, if your connection strings have MARS enabled, turn that off NOW!  It doesn't work the way most people think it does, and it can cause a whole lot of timeouts with rollbacks, especially if you have code that takes 2 to 22 seconds to compile every time it's called.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thank you, Jeff, for the insights. I caught the point about the NCIs and the fragmented portion also occupying memory.

    Slight confusion about the clustered indexes, though: should we anticipate any updates on them, considering that they're built on unique-valued columns like primary keys, unique IDs, numbers, etc.?

    And what is MARS in the connection strings you mentioned above?

    Thank you..Arshad

  • Arsh - Sunday, August 27, 2017 12:10 PM

    do you think the index fragmentation also is an issue here?

    No.

    Is the fragmented portion of the index also read into memory?


    All data has to be in memory to be used by the query execution engine. It can't process directly from disk.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Arsh - Sunday, August 27, 2017 1:36 PM

    And what is MARS in the connection strings you mentioned above?

    Something that is off by default and almost never used. I have never seen it in use on any client system.
    Google will get you details.
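For reference, MARS stands for Multiple Active Result Sets. In a SqlClient-style connection string it's controlled by the keyword below (the server and database names here are placeholders; False is the default):

```
Server=myServer;Database=myDatabase;Integrated Security=true;MultipleActiveResultSets=False;
```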

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Sunday, August 27, 2017 2:06 PM

    Arsh - Sunday, August 27, 2017 1:36 PM

    And what is MARS in the connection strings you mentioned above?

    Something that is off by default and almost never used. I have never seen it in use on any client system.
    Google will get you details.

    Thank you, Gail. Will do that. I want to add one more point about this database: I checked the Buffer Cache Hit Ratio and, to my surprise, it's good at 99 to 100 over several runs, yet for the same DB the PLE is low. Any comments on this?

  • The first result of a Google search for "buffer cache hit ratio":
    https://www.red-gate.com/simple-talk/sql/database-administration/great-sql-server-debates-buffer-cache-hit-ratio/
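The short version of why BCHR can look fine while PLE is low: read-ahead brings pages into the buffer pool before queries touch them, so those reads still count as cache "hits" and the ratio stays near 100 even under heavy churn. The counter itself is derived from two raw values (a sketch):

```sql
-- Buffer cache hit ratio = ratio counter divided by its base counter
SELECT 100.0 * a.cntr_value / b.cntr_value AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.[object_name] = b.[object_name]
WHERE a.counter_name = N'Buffer cache hit ratio'
  AND b.counter_name = N'Buffer cache hit ratio base';
```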

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you Gail.

