Memory with a SQL Server 2008 R2 Cluster

  • We are running an Active/Active SQL Server cluster with three instances on each node. Each node currently has 16 GB of memory, and we are planning to max both nodes out at 32 GB of RAM each. It is my understanding that the best practice is to take the total memory in a box, divide that number by 2, then subtract 2 GB for the operating system; that gives you the amount of memory that can be divided among the instances. I cannot see leaving 16 GB of memory sitting idle just in case of a failover.

    Also, when a failover occurs, do the instances that fail over keep the memory settings they had on the failed node, or do they just share the memory that is sitting idle?

    Thank you in advance,

    Charlie

  • rummings (9/28/2011)


    We are running an Active/Active SQL Server cluster with three instances on each node. Each node currently has 16 GB of memory, and we are planning to max both nodes out at 32 GB of RAM each. It is my understanding that the best practice is to take the total memory in a box, divide that number by 2, then subtract 2 GB for the operating system; that gives you the amount of memory that can be divided among the instances. I cannot see leaving 16 GB of memory sitting idle just in case of a failover.

    Also, when a failover occurs, do the instances that fail over keep the memory settings they had on the failed node, or do they just share the memory that is sitting idle?

    Thank you in advance,

    Charlie

    Not sure about that calculation except for the 2 GB for the OS part; I agree with that. Based on what you say, you have 2 nodes x 3 SQL instances per node = 6 SQL instances? Is that right?

    At failover the max/min memory settings ARE kept; they are stored in master. However, you don't want to allow overcommitment of memory, which can lead to thrashing. What I mean is that if you have 32 GB of memory, with about 30 GB available, you don't want three SQL instances that each have a max of 16 GB (16 GB x 3 = 48 GB).
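
    For reference, a minimal T-SQL sketch of checking and setting the per-instance cap with sp_configure; the 12288 MB figure is purely an illustrative number, not a recommendation:

    -- 'max server memory (MB)' is an advanced option, so expose it first
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    GO
    -- Show the current max server memory setting for this instance
    EXEC sys.sp_configure N'max server memory (MB)';
    GO
    -- Cap this instance (example value only)
    EXEC sys.sp_configure N'max server memory (MB)', 12288;
    RECONFIGURE;
    GO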

    I am a little unsure exactly what your question is. Could you clarify?

    CEWII

  • Thank you for the information, CEWII. I guess what I am asking is this: by the formula I mentioned in the original post, I have (32/2) - 2 = 14 GB of RAM to use among the 3 instances on each of our cluster nodes. Say I allocate 8 GB to one instance, 4 GB to another, and 2 GB to the last. With the 16 GB of RAM that is left unused, I would like to cheat the system and use more of it, say 12 GB, 8 GB, and 4 GB. I understand what you say about overextending and causing thrashing, but I was hoping I could reset the memory if there was a failover. To date, all of our failovers have been ones that I have forced. Having said that, now one will occur at 3 am; Murphy's Law. Thanks again for replying!

    Charlie

  • That best-practice rule sounds like it is meant for the case of having one instance on each node and being able to run both in case of a failover, i.e.

    Two nodes

    32 GB each

    Two instances

    Each instance would get 32/2 - 2 = 14 GB, so in the case of a failover, both would be running with 14 GB each, leaving 4 GB free for the OS.

    While this is good in theory, you're throwing away good, usable memory in the meantime. A better solution would be to have alerts sent to you in case there's a failover so you can resolve it or adjust the memory manually.

    As far as how much RAM you should allocate to each instance, that depends. Typically you'll have to play it by ear for a few days and adjust until they all have about equal Page Life Expectancy, or give more memory to your OLTP system and take a bit from a reporting instance, for example.
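
    If it helps, one simple way to compare Page Life Expectancy across instances is the buffer manager counter exposed in sys.dm_os_performance_counters; a minimal sketch (run it on each instance):

    -- Current Page Life Expectancy (seconds) for this instance
    SELECT [object_name], counter_name, cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy'
      AND [object_name] LIKE N'%Buffer Manager%';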

  • Derrick,

    I like that plan. Thank you for the information.

    Charlie

  • OK, that makes more sense.

    You need to make sure that you have enough memory left over for all instances to be running on the same node, which is the reason for the division by 2. With that said, I understand the desire to use all available memory; out of the box that isn't going to work, but you can build something.

    I've been working on a process, using an SSIS package run at SQL start time, that looks at which SQL instances are on which nodes and assigns max memory based on the mix on a single machine. My goal is to allow ALL memory on a machine to be used (except for 2 GB) no matter which instances are running. I am using a "shares" model: if I have 6 instances I assign each a number of shares, perhaps 10 for important instances and 1-4 for the least important. I sum the shares of all the instances running on a particular box and use each instance's percentage of that total to assign memory out of the total memory. For example, with 5 instances and shares of 10, 4, 3, 2 and 1 (20 shares total), the instance with 10 shares gets 50% of the memory, the one with 4 shares gets 20%, and so on.
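
    A minimal T-SQL sketch of the shares arithmetic described above; the instance names, share values, and the 2 GB OS reservation are illustrative assumptions, not the actual package:

    -- Hypothetical illustration of the shares model
    DECLARE @total_mb int = 32768 - 2048;  -- total RAM minus roughly 2 GB reserved for the OS

    DECLARE @instances TABLE (instance_name sysname, shares int);
    INSERT INTO @instances VALUES
        (N'PROD1', 10), (N'PROD2', 4), (N'RPT1', 3), (N'DEV1', 2), (N'DEV2', 1);

    -- Each instance's max server memory is its fraction of the total shares
    SELECT instance_name,
           shares,
           @total_mb * shares / SUM(shares) OVER () AS max_server_memory_mb
    FROM @instances;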

    What I haven't had a chance to work out yet is a minimum that will always be granted, such as 1 GB no matter how many shares are assigned, and better concurrency control. I have also been looking at other methods, such as multi-server jobs and linked servers.

    CEWII

  • Thanks for the additional information, Elliott. That scenario is exactly what I was hoping to achieve. I have not had the need to create SSIS packages as of yet, so I will need to look into this possibility now. Thanks again.

    Charlie

  • Myth: The sum of Max Server Memory for all instances in a cluster should be less than the physical memory of the node with the least memory



    Ole Kristian Velstadbråten Bangås - Virinco - Facebook - Twitter

    Concatenating Row Values in Transact-SQL

  • I have to challenge the "myth busted" claim; the post does not bust the myth. The post DOES, however, have some good recommendations at the end.

    If you overcommit memory you will end up with thrashing in nearly all cases. The post correctly points out that SQL dynamically allocates memory and will try to grow up to its max. But having several SQL instances fighting for memory cannot be considered a "good" thing. Whether or not you consider the myth busted, it should still be taken into account; failing to do so will be at your own peril.

    CEWII

  • I am just curious to know the reason for opting for an 'Active/Active' cluster in place of an 'Active/Passive' cluster. The latter would be easier to manage, and as I understand it the reason for using clusters is to maximise instance availability, not cost saving.

    Would you please explain it?

  • Why A/A? For me it was performance. I have 2 instances and 2 machines; I can get max performance in the 99% case and am willing to accept degraded performance in the 1% case. What I won't accept is the server-down case.

    CEWII

  • It is really all about cost versus benefit. If you need two instances, a so-called active/passive cluster would require 3 nodes, and for that you need Enterprise Edition of SQL Server, which is quite costly. It may be more cost-efficient to scale up the hardware somewhat and run a two-node active/active cluster, which needs only Standard Edition.



    Ole Kristian Velstadbråten Bangås - Virinco - Facebook - Twitter

    Concatenating Row Values in Transact-SQL

  • For managing memory, you could schedule the following two steps:

    First, set max server memory to 30 GB of RAM for each SQL instance, since you run almost entirely in an active/active state.

    Then arrange scheduled jobs to govern the memory through a daily memory recycle. How?

    First job:

    -- Flush the security token and permission cache
    DBCC FREESYSTEMCACHE ('TokenAndPermUserStore')

    -- Flush the distributed query connection cache
    DBCC FREESESSIONCACHE

    Then

    EXEC sys.sp_configure N'max server memory (MB)', N'1000'

    GO

    RECONFIGURE WITH OVERRIDE

    GO

    Second job:

    Schedule it around 2 minutes later, so that the first job has taken effect.

    This job just contains the command below:

    EXEC sys.sp_configure N'max server memory (MB)', N'30000'

    GO

    RECONFIGURE WITH OVERRIDE

    GO
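
    Note that 'max server memory (MB)' is an advanced option, so both jobs assume 'show advanced options' has already been enabled on the instance; otherwise sp_configure will report that the option does not exist or may be an advanced option.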

    Performance Guard

    Just ask me to get 0 sec for all queries

