Can you set a pagefile more than 4GB?

  • Nitya (8/18/2010)


    We have twice as much memory allocated as pagefile.

    Is there a threshold or related impact on having such a setup?

    I know someone who used to set it with the minimum and maximum sizes being the same. I thought he was crazy, and it used to annoy me, until I read that it cuts down on fragmentation in an active paging file. The downside is that if you need more space than you assigned, it's going to slow the server down by a lot.

We have the page file at 32GB as opposed to 16GB of memory.

    Hope it doesn't do any harm.

on our 64-bit servers with 32GB of RAM I set it to 32GB - 64GB, and I don't remember the last time it grew past 32GB

4x RAM is more than plenty. I think most machines get unstable when they start using more than 2x, and machines with adequate RAM will never hit 2x RAM of page file usage unless the software has a major memory leak problem (and the leaked RAM is usually paged out and 'forgotten about' if it is no longer referenced).

  • Hi Stanley,

    A very informative and detailed article. Just one point to add: to keep the system running efficiently, we keep the pagefile at 1.5 times RAM on most of our Windows 2000 and Windows 2003 servers, kept in 2 partitions.

    Would like to know your opinion on this.

  • YeshuaAgapao (8/18/2010)


    I use the /PAE boot.ini switch on a 32-bit Windows XP workstation with 4GB of RAM. It not only lets me use 3.25GB of that RAM, it lets me set the paging file to whatever size I want (I set it to 8GB initial, 16GB max).

    YeshuaAgapao,

    32-bit Windows XP has a limitation: it supports only about 3GB of RAM.

  • Guys,

    these pagefile discussions drive me crazy. First, most of the posts are off-topic. The discussion is about MEMORY DUMPS, not about paging-file calculation. Second, most people don't know the internals of paging:

    - For all those who complained about needing "hacks" and "tricks" and want to configure it to a minimum: Windows doesn't need a paging file at all.

    - For those of you who calculate the paging file based on RAM: this is the wrong approach. The paging file doesn't need to be "2x RAM or 4x RAM" - it just depends. Basically, the more RAM you have, the less paging file you need. If you configure your SQL Server in a reasonable way and have enough RAM, you can leave the paging file very small (and as I said in a previous post: configure a kernel memory dump, that's enough in 98% of all cases).

    - For those of you who care about fragmentation in the PF: the performance drop WHEN the paging file is used (it won't be often!) is so high that fragmentation just doesn't matter.

    - Despite all the myths about it: paging doesn't happen often. It's very hard to see (and DON'T take hard faults as an indicator of paging!). It happens only if the system cache is empty AND other processes demand a lot of memory. Then the OS will trim the least heavily used pages from other processes and store them in the paging file.

    - A rule of thumb: monitor your server and look at how the peak commit charge behaves. This is what might get paged out to the paging file on your server in the worst case. This number should be the basis of your calculation, not the size of RAM.

    - If you use AWE in your SQL Server to support more than , your pages will NEVER be paged out.

    - IF you set "lock pages in memory" your SQL Server pages will not get trimmed in case of high memory demand.

    regards

    Andreas
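The sizing approach Andreas describes above can be sketched in a few lines: base the paging file on the observed peak commit charge, not on a multiple of RAM. This is only an illustrative sketch; the `headroom` factor, the 2GB floor, and the sample numbers are invented assumptions, not from the thread.

```python
# Illustrative sketch: size the paging file from the observed peak
# commit charge rather than a fixed multiple of RAM.
# All constants below (headroom, floor) are assumptions for the example.

def pagefile_size_gb(peak_commit_gb: float, ram_gb: float,
                     headroom: float = 1.2) -> float:
    """Return a suggested paging-file size in GB.

    peak_commit_gb: highest commit charge observed while monitoring.
    ram_gb: installed physical memory.
    headroom: safety factor on top of the worst observed peak.
    """
    # In the worst case, only what cannot stay in RAM needs backing store;
    # keep a small floor so e.g. a kernel memory dump still has room.
    needed = max(peak_commit_gb * headroom - ram_gb, 0.0)
    floor_gb = 2.0  # assumed minimum
    return max(needed, floor_gb)

# Example: a server with 32GB of RAM whose commit charge peaked at 40GB.
print(pagefile_size_gb(peak_commit_gb=40, ram_gb=32))  # 16.0
```

With plenty of RAM and a modest peak commit, the function falls back to the small floor, which matches the point that a well-provisioned server needs only a small paging file.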

  • James Stephens (8/18/2010)


    This is good information to know. I hate to get negative on MS but I have to:

    Why should a professional server administrator need to resort to "tricks" and "hacks" to do something--not only normal--but *required* if one wants to actually support a mission-critical machine? Why couldn't one just set the pagefile size--or--better yet--have it default to the minimum required?

    My questions are rhetorical; this is just a head-shaker.

    ---Jim

    Jim,

    Thanks. I agree with you that it should not require tricks for normal operations.

  • manu.tiwari (8/18/2010)


    Hi Stanley,

    A very informative and detailed article. Just one point to add: to keep the system running efficiently, we keep the pagefile at 1.5 times RAM on most of our Windows 2000 and Windows 2003 servers, kept in 2 partitions.

    Would like to know your opinion on this.

    Hi Manu,

    Well, it depends. Although Microsoft recommends keeping the pagefile at 1.5 times RAM, as long as the pagefile is not being used frequently, it is fine. However, I set the pagefile to a fixed size when setting up a new machine, so that the pagefile does not cause disk fragmentation.
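The fixed-size convention Stanley mentions (initial and maximum set to the same value, so the file never grows or fragments) amounts to simple arithmetic. A hypothetical sketch, with the 1.5x factor taken from the recommendation quoted above:

```python
# Sketch of the fixed-size "1.5 x RAM" convention: initial and maximum
# are set to the same value, so the paging file never grows and
# therefore cannot fragment on disk. Purely illustrative arithmetic.

def fixed_pagefile_mb(ram_gb: float, factor: float = 1.5) -> tuple[int, int]:
    """Return (initial_mb, maximum_mb) for a fixed-size paging file."""
    size_mb = int(ram_gb * factor * 1024)
    return size_mb, size_mb  # equal values: no growth, no fragmentation

# Example: a 16GB machine gets a fixed 24GB paging file.
initial, maximum = fixed_pagefile_mb(16)
print(initial, maximum)  # 24576 24576
```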

  • ab.sqlservercentral (8/18/2010)


    Guys,

    these pagefile discussions drive me crazy. First, most of the posts are off-topic. The discussion is about MEMORY DUMPS, not about paging-file calculation. Second, most people don't know the internals of paging:

    - For all those who complained about needing "hacks" and "tricks" and want to configure it to a minimum: Windows doesn't need a paging file at all.

    - For those of you who calculate the paging file based on RAM: this is the wrong approach. The paging file doesn't need to be "2x RAM or 4x RAM" - it just depends. Basically, the more RAM you have, the less paging file you need. If you configure your SQL Server in a reasonable way and have enough RAM, you can leave the paging file very small (and as I said in a previous post: configure a kernel memory dump, that's enough in 98% of all cases).

    - For those of you who care about fragmentation in the PF: the performance drop WHEN the paging file is used (it won't be often!) is so high that fragmentation just doesn't matter.

    - Despite all the myths about it: paging doesn't happen often. It's very hard to see (and DON'T take hard faults as an indicator of paging!). It happens only if the system cache is empty AND other processes demand a lot of memory. Then the OS will trim the least heavily used pages from other processes and store them in the paging file.

    - A rule of thumb: monitor your server and look at how the peak commit charge behaves. This is what might get paged out to the paging file on your server in the worst case. This number should be the basis of your calculation, not the size of RAM.

    - If you use AWE in your SQL Server to support more than , your pages will NEVER be paged out.

    - IF you set "lock pages in memory" your SQL Server pages will not get trimmed in case of high memory demand.

    regards

    Andreas

    Andreas.

    Thanks for the clear and detailed clarification.

Why set a size and constrain the memory manager in the first place? Set the pagefile as system managed; it will be set to an initial size and can grow in extents if required.

    And it's always a good idea to retain a pagefile, however much RAM you have, since you'll never really have more than enough. Disabling it or setting it low means the memory manager is constrained in doing its job and is forced to keep unbacked pages in memory rather than page them out to the pagefile.

    I'd disagree about the frequency of paging, since pre-caching will tend to cause unused memory pages to be dropped. The OS will only store non-backed pages in the page file; memory pages that are backed on disk via memory-mapped files (e.g. .exe and .dll files) have no need to be paged to the pagefile - pages will just be dropped as required and paged back in directly from the file.

    Pagefile fragmentation is also a non-issue.

  • Magic Man (8/19/2010)


    Why set a size and constrain the memory manager in the first place? Set the pagefile as system managed; it will be set to an initial size and can grow in extents if required.

    Because you will not get a valid memory dump if you get a bluescreen. That's what the initial post was about.

  • ab.sqlservercentral (8/20/2010)


    Magic Man (8/19/2010)


    Why set a size and constrain the memory manager in the first place? Set the pagefile as system managed; it will be set to an initial size and can grow in extents if required.

    Because you will not get a valid memory dump if you get a bluescreen. That's what the initial post was about.

    You will, because the OS will extend the pagefile size if needed to accommodate it.

  • Magic Man (8/19/2010)


Why set a size and constrain the memory manager in the first place? Set the pagefile as system managed; it will be set to an initial size and can grow in extents if required.

    And it's always a good idea to retain a pagefile, however much RAM you have, since you'll never really have more than enough. Disabling it or setting it low means the memory manager is constrained in doing its job and is forced to keep unbacked pages in memory rather than page them out to the pagefile.

    I'd disagree about the frequency of paging, since pre-caching will tend to cause unused memory pages to be dropped. The OS will only store non-backed pages in the page file; memory pages that are backed on disk via memory-mapped files (e.g. .exe and .dll files) have no need to be paged to the pagefile - pages will just be dropped as required and paged back in directly from the file.

    Pagefile fragmentation is also a non-issue.

    Hi Magic Man,

You have some good points about letting the system manage the pagefile.

    However, why do you think pagefile fragmentation is not an issue?

  • Magic Man (8/20/2010)


    ab.sqlservercentral (8/20/2010)


    Magic Man (8/19/2010)


    Why set a size and constrain the memory manager in the first place? Set the pagefile as system managed; it will be set to an initial size and can grow in extents if required.

    Because you will not get a valid memory dump if you get a bluescreen. That's what the initial post was about.

    You will, because the OS will extend the pagefile size if needed to accommodate it.

    Hi Magic Man,

    Since I do not let the system manage the pagefile, I am not sure whether the OS will extend the pagefile when physical memory is larger than 4GB.

    Have you tried it?
