
Managing MaxMemory Values on an Active-Active Two Node Cluster
Posted Wednesday, October 23, 2013 9:38 PM
Ten Centuries

Comments posted to this topic are about the item Managing MaxMemory Values on an Active-Active Two Node Cluster


Post #1507871
Posted Wednesday, October 23, 2013 10:14 PM
SSC Rookie

That's a good article.

You could add one more line: set the schedule for jobs 1, 2, and 3 to one time (run on demand).
Post #1507873
Posted Thursday, October 24, 2013 6:05 AM
Ten Centuries

Prudhviraj (10/23/2013)
That's a good article.

You could add one more line: set the schedule for jobs 1, 2, and 3 to one time (run on demand).

Hi Prudhviraj!

Glad to hear you liked the article.

You certainly could create an on-demand job schedule for those jobs, but I don't think it is necessary. They run when called by the "Alert and Set MaxMemory on Restart" job. Would you please elaborate on why you want to add an "on demand" schedule for those jobs?
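
For anyone following along, here is a minimal sketch of how the controlling job can dispatch the others through msdb.dbo.sp_start_job; the flag and the two target job names below are illustrative, not the article's exact objects:

-- Assumed: @bothOnThisNode is set earlier by whatever check the
-- controlling job uses to detect that both instances share a node.
DECLARE @bothOnThisNode bit = 0;

IF @bothOnThisNode = 1
    EXEC msdb.dbo.sp_start_job @job_name = N'Set MaxMemory - Shared Node';
ELSE
    EXEC msdb.dbo.sp_start_job @job_name = N'Set MaxMemory - Dedicated Node';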

Mike



Post #1507991
Posted Thursday, October 24, 2013 8:15 AM
SSC Veteran

Interesting article. I'm questioning the comment regarding "wasting memory" when the max memory setting is split between the two instances. Technically you would NOT want to split that memory evenly, as you could choke the OS. Agreed, if the second instance is not running on the other node it could be construed that the memory is being wasted; however, SQL will only pull the memory that it needs. It does not automatically assign that max memory to the server and reserve it. In fact, if the OS needs some, SQL will release memory back to the OS. I have seen an instance fail to start because there was insufficient memory, but that is not easily reproducible. The min memory setting, however, does have the ability to cause the kind of problem you are referring to.
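
An easy way to see that behavior on a live instance is to compare the configured cap with what the process has actually committed. A hedged example using only standard DMVs (no custom objects):

-- The cap is a ceiling, not a reservation: physical_memory_in_use_kb
-- will typically sit below value_in_use on a lightly loaded instance.
SELECT c.value_in_use AS [max server memory (MB)],
       pm.physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       pm.memory_utilization_percentage
FROM sys.configurations AS c
CROSS JOIN sys.dm_os_process_memory AS pm
WHERE c.name = N'max server memory (MB)';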
Post #1508072
Posted Thursday, October 24, 2013 9:32 AM
SSC Rookie

Excellent article
Post #1508126
Posted Thursday, October 24, 2013 10:03 AM
Grasshopper

I would expect most instances to reach the MAXMEM setting at some point, unless the database is smaller than the available memory, and SQL Server will not always give memory back to the OS quickly enough, which causes paging. Being cautious, I would be concerned about expecting two instances to run in half the memory they normally run in, and would probably prefer them to run at the smaller setting permanently, with the option to increase it manually if you are doing something memory-intensive as a special case (see the sketch below).
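
For what it's worth, that manual special case is just a pair of sp_configure calls; the values below are illustrative only:

-- Raise the cap before the memory-intensive work (example values).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 49152;
RECONFIGURE;

-- ...run the memory-intensive workload...

-- Then drop back to the permanent, conservative cap.
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;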
Post #1508138
Posted Thursday, October 24, 2013 10:25 AM
SSC-Enthusiastic

Very interesting and great idea to assign max memory dynamically! Thanks for sharing.

We currently have a cluster failover alert job that runs every minute to check which physical node each instance is currently on. A simple record is kept in a table in a DBA database. This way we do not have to deal with enabling an Agent start-up job or using xp_ extended procedures to reach the file system. I guess I could easily expand our routine to incorporate this procedure and assign the max memory settings upon failover. A rough sketch of the polling step is below.
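
A minimal sketch of that polling step, assuming a hypothetical history table DBA.dbo.NodeHistory (InstanceName, CurrentNode, RecordedAt):

-- Which physical node is hosting this instance right now?
DECLARE @node sysname =
    CONVERT(sysname, SERVERPROPERTY('ComputerNamePhysicalNetBIOS'));

-- The node recorded on the previous poll, if any.
DECLARE @lastNode sysname =
    (SELECT TOP (1) CurrentNode
     FROM DBA.dbo.NodeHistory
     WHERE InstanceName = @@SERVERNAME
     ORDER BY RecordedAt DESC);

-- Write a row only when the hosting node changes (or on the first run);
-- this is also where a max memory adjustment could be hooked in.
IF @lastNode IS NULL OR @lastNode <> @node
    INSERT DBA.dbo.NodeHistory (InstanceName, CurrentNode, RecordedAt)
    VALUES (@@SERVERNAME, @node, SYSDATETIME());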
Post #1508152
Posted Thursday, October 24, 2013 11:09 AM
SSC Veteran

A couple of things caught my eye:

1) The jobs would only run if the cluster was actually running active/active, with the instances running on opposite nodes.

2) After the failback, the job has to be run manually. This adds an extra thing to keep track of during routine maintenance such as applying security hotfixes.

3) Keeping everything up to date on more than a few servers is a pain, and not likely to happen 100% of the time in a medium-to-large shop.

In an ideal world, these jobs would be scripts that run off of alerts from the monitoring tool. The node and memory information could be kept in a Configuration Management Database (CMDB), with blackouts in effect during maintenance windows. Of course, we don't always live in the ideal world.
Post #1508182
Posted Thursday, October 24, 2013 11:20 AM
Ten Centuries

lptech (10/24/2013)
A couple of things caught my eye:

1) The jobs would only run if the cluster was actually running active/active, with the instances running on opposite nodes.

2) After the failback, the job has to be run manually. This adds an extra thing to keep track of during routine maintenance such as applying security hotfixes.

3) Keeping everything up to date on more than a few servers is a pain, and not likely to happen 100% of the time in a medium-to-large shop.

In an ideal world, these jobs would be scripts that run off of alerts from the monitoring tool. The node and memory information could be kept in a Configuration Management Database (CMDB), with blackouts in effect during maintenance windows. Of course, we don't always live in the ideal world.


The job “Alert and Set MaxMemory on Restart” should be scheduled to “Start automatically whenever the SQL Server Agent starts”. That will fire off the jobs to set MaxMem whenever a fail-over occurs, or if you manually move an instance to a node. You should not have to manually run any of the jobs.
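
For reference, attaching that schedule in T-SQL is a single call against msdb; the schedule name below is made up:

-- freq_type = 64 creates a schedule that fires whenever the
-- SQL Server Agent service starts.
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Alert and Set MaxMemory on Restart',
    @name     = N'On Agent Startup',
    @enabled  = 1,
    @freq_type = 64;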



Post #1508189
Posted Thursday, October 24, 2013 12:40 PM
Grasshopper

What about the memory adjustments that each instance is supposed to make in the case of a fail-over to the same node? It is my understanding that the instances cooperate with each other and with the OS to scale their memory usage to an acceptable level for each.
Post #1508222