Managing MaxMemory Values on an Active-Active Two Node Cluster

  • RML51

    SSCarpal Tunnel

    Points: 4410

    Comments posted to this topic are about the item Managing MaxMemory Values on an Active-Active Two Node Cluster

  • Prudhviraj

    SSC Veteran

    Points: 262

    That's a good article 🙂

    You could add one more line: set the schedule for jobs 1, 2, and 3 to one time (run on demand).

  • RML51

    SSCarpal Tunnel

    Points: 4410

    Prudhviraj (10/23/2013)


    That's a good article 🙂

    You could add one more line: set the schedule for jobs 1, 2, and 3 to one time (run on demand).

    Hi Prudhviraj!

    Glad to hear you liked the article.

    You certainly could create a job schedule (on demand) for those jobs, but I don't think it is necessary. They run when called by the "Alert and set MaxMemory on Restart" job. Would you please elaborate and state why you want to add an "on demand" schedule for those jobs?

    Mike

  • B's-Data

    SSChasing Mays

    Points: 607

    Interesting article. I'm questioning the comment about "wasting memory" when the max memory setting is split between the two instances. Technically you would NOT want to split that memory evenly, as you could choke the OS. Agreed that if the second instance is not running on the other node, the memory could be construed as wasted; however, SQL will only pull the memory that it needs. It does not automatically assign that max memory to the server and reserve it. In fact, if the OS needs some, SQL will release memory back to the OS. I have seen an instance fail to start because there was insufficient memory, but that is not easily reproducible. The min memory setting, however, does have the ability to cause the kind of problem you are referring to.
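
    The "SQL will only pull the memory that it needs" behavior is easy to observe. As an illustration (a sketch only; the columns shown are from sys.dm_os_sys_info as of SQL Server 2012 and later):

    ```sql
    -- Compare the memory SQL Server has actually committed with its target,
    -- which is driven by the 'max server memory' setting.
    SELECT committed_kb / 1024        AS committed_mb,
           committed_target_kb / 1024 AS target_mb
    FROM sys.dm_os_sys_info;
    ```

    If committed_mb sits well below target_mb, the instance has not yet grabbed its full allowance.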

  • krystian

    Old Hand

    Points: 351

    Excellent article

  • colin.s.allen

    SSC Enthusiast

    Points: 126

    I would expect most instances to reach the MAXMEM setting at some point unless the database is smaller than the available memory, and SQL Server will not always give memory back to the OS quickly enough, which causes paging. Being cautious, I would be concerned about expecting two instances to run in half the memory they normally have; I would probably prefer to run them at the smaller setting permanently, with the option to increase it manually as a special case when doing something memory-intensive.

  • SQLVoila

    Mr or Mrs. 500

    Points: 553

    Very interesting and great idea to assign max memory dynamically! Thanks for sharing.

    We currently have a cluster failover alert job that runs every minute to check which physical node each instance is currently on. A simple record is kept in a table in a DBA database. This way we do not have to deal with enabling an agent start-up job or using an _xp to reach the file system. I guess I could easily expand our routine to incorporate this procedure and assign the max memory settings upon failover.
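
    A minimal sketch of that kind of node-check job, assuming a hypothetical DBA database with a NodeHistory table (InstanceName, NodeName, RecordedAt - the names are illustrative only):

    ```sql
    -- Record the physical node hosting this instance, but only when it changes.
    DECLARE @CurrentNode sysname =
        CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname);
    DECLARE @LastNode sysname =
        (SELECT TOP (1) NodeName
         FROM DBA.dbo.NodeHistory
         WHERE InstanceName = @@SERVERNAME
         ORDER BY RecordedAt DESC);

    IF @LastNode IS NULL OR @LastNode <> @CurrentNode
        INSERT INTO DBA.dbo.NodeHistory (InstanceName, NodeName, RecordedAt)
        VALUES (@@SERVERNAME, @CurrentNode, SYSDATETIME());
    ```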

  • lptech

    Hall of Fame

    Points: 3188

    A couple of things caught my eye:

    1) The jobs would run even if the cluster was actually running active/active with the instances on opposite nodes.

    2) After the failback, the job has to be manually run. This would add an extra thing to keep track of during routine maintenance such as applying security hotfixes.

    3) Keeping everything up-to-date on more than a few servers is a pain, and not likely to happen 100% of the time in a medium-to-large shop.

    In an ideal world, these jobs would be scripts that run off of alerts from the monitoring tool. The node and memory information could be kept in a Configuration Management Database (CMDB), with blackouts in effect during maintenance windows. Of course, we don't always live in the ideal world.

  • RML51

    SSCarpal Tunnel

    Points: 4410

    lptech (10/24/2013)


    A couple of things caught my eye:

    1) The jobs would run even if the cluster was actually running active/active with the instances on opposite nodes.

    2) After the failback, the job has to be manually run. This would add an extra thing to keep track of during routine maintenance such as applying security hotfixes.

    3) Keeping everything up-to-date on more than a few servers is a pain, and not likely to happen 100% of the time in a medium-to-large shop.

    In an ideal world, these jobs would be scripts that run off of alerts from the monitoring tool. The node and memory information could be kept in a Configuration Management Database (CMDB), with blackouts in effect during maintenance windows. Of course, we don't always live in the ideal world.

    The job “Alert and Set MaxMemory on Restart” should be scheduled to “Start automatically whenever the SQL Server Agent starts”. That will fire off the jobs to set MaxMem whenever a fail-over occurs, or if you manually move an instance to a node. You should not have to manually run any of the jobs.
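
    For reference, the memory change each of those jobs makes boils down to the standard sp_configure call. A sketch, using the 245GB figure from the article as an example value only:

    ```sql
    -- 'max server memory (MB)' is an advanced option, so expose those first.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- ~245 GB when an instance has a node to itself; use a lower value
    -- (for example roughly half) when both instances share a node.
    EXEC sys.sp_configure 'max server memory (MB)', 250880;
    RECONFIGURE;
    ```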

  • qiyuef

    SSC Enthusiast

    Points: 110

    There should be logic in the job to check whether a failover actually happened; otherwise, it will keep resetting the MaxMemory setting to the same value.
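
    One way to add that kind of guard, assuming the last-known node is stored in a table (DBA.dbo.LastKnownNode is a hypothetical name), is to compare it with the current node before reconfiguring:

    ```sql
    DECLARE @CurrentNode sysname =
        CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname);
    DECLARE @LastKnownNode sysname =
        (SELECT NodeName FROM DBA.dbo.LastKnownNode
         WHERE InstanceName = @@SERVERNAME);

    IF @LastKnownNode IS NULL OR @LastKnownNode <> @CurrentNode
    BEGIN
        -- A failover (or first run) happened: adjust max memory here,
        -- then remember which node we are on now.
        UPDATE DBA.dbo.LastKnownNode
           SET NodeName = @CurrentNode
         WHERE InstanceName = @@SERVERNAME;
    END
    ```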

  • RML51

    SSCarpal Tunnel

    Points: 4410

    Andrew L. Smith-213145 (10/24/2013)


    What about the memory adjustments that each instance is supposed to make in the case of a failover to the same node? It is my understanding that the instances cooperate with each other and the OS to scale their memory usage to an acceptable level for each.

    I want to be able to control my memory to give more to my ProdDBs and less to my ReptDBs if they are both on the same node.

  • RML51

    SSCarpal Tunnel

    Points: 4410

    qiyuef (10/24/2013)


    There should be logic in the job to check whether a failover actually happened; otherwise, it will keep resetting the MaxMemory setting to the same value.

    The code to set the MaxMem will only run when the SQL Server Agent is started. Usually that doesn't happen too often.

  • pop022

    Old Hand

    Points: 324

    This was a great article. I implemented this on a cluster we are putting in production this weekend. I thoroughly tested it and it worked like a charm. This is a big help.

  • Henning Peter Jensen

    SSC Journeyman

    Points: 94

    Good idea - but you might want to adjust the max memory settings back to the default if one of the two instances leaves the node. So if you add an hourly schedule that resets the max memory to full capacity when an instance is running alone on the node, it will look almost perfect.
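
    One way to make that hourly check, assuming a linked server pointing at the second instance (OTHER_INSTANCE is a hypothetical name):

    ```sql
    -- Compare this instance's physical node with the other instance's node.
    DECLARE @MyNode sysname =
        CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname);
    DECLARE @OtherNode sysname;

    SELECT @OtherNode = NodeName
    FROM OPENQUERY(OTHER_INSTANCE,
        'SELECT CAST(SERVERPROPERTY(''ComputerNamePhysicalNetBIOS'') AS sysname) AS NodeName');

    IF @MyNode <> @OtherNode
    BEGIN
        -- Alone on this node: restore the full max memory (example value).
        EXEC sys.sp_configure 'max server memory (MB)', 250880;  -- ~245 GB
        RECONFIGURE;
    END
    ```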

  • RML51

    SSCarpal Tunnel

    Points: 4410

    Henning Peter Jensen (10/26/2013)


    Good idea - but you might want to adjust the max memory settings back to the default if one of the two instances leaves the node. So if you add an hourly schedule that resets the max memory to full capacity when an instance is running alone on the node, it will look almost perfect.

    As soon as an instance is failed back, so that each instance is running on a separate node, these scripts/jobs will reset the max memory to their defined high values - in my examples, to 245GB.

Viewing 15 posts - 1 through 15 (of 16 total)
