Quorum with a two-node Availability Group

  • Hello,

    I have a two-node cluster with non-shared storage set up in my TEST environment in one data center. I successfully set up an Availability Group with a test database between these two nodes. The quorum model is Node and File Share Majority. However, whenever I fail over the Availability Group to the other node, it says "This quorum configuration is not recommended".

    What type of quorum model exactly do I need for this two node cluster?

    Do I need to give a vote to both nodes?

    Please advise.

    Thanks much

    Attopeu,

  • Check the quorum configuration via the AlwaysOn dashboard. It will show you the number of votes for each of the nodes and for the file share witness. All should have 1 vote.

    You can also stop the cluster/SQL service on your primary replica and check whether automatic failover happens. As long as your failover is working fine, you need not worry.

    You might need to install a hotfix to assign votes to nodes if they don't have votes.

    Let me know how it goes.

  • Attopeu (12/11/2012)


    Hello,

    I have a two-node cluster with non-shared storage set up in my TEST environment in one data center. I successfully set up an Availability Group with a test database between these two nodes. The quorum model is Node and File Share Majority. However, whenever I fail over the Availability Group to the other node, it says "This quorum configuration is not recommended".

    What type of quorum model exactly do I need for this two node cluster?

    The cluster quorum models are as follows:

    • Node Majority (recommended for clusters with an odd number of nodes)

      Can sustain failures of half the nodes (rounding up) minus one. For example, a seven-node cluster can sustain three node failures.

    • Node and Disk Majority (recommended for clusters with an even number of nodes)

      Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six-node cluster in which the disk witness is online could sustain three node failures.

      Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six-node cluster with a failed disk witness could sustain two (3-1=2) node failures.

    • Node and File Share Majority (for clusters with special configurations)

      Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.

      Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see "Additional considerations" in Start or Stop the Cluster Service on a Cluster Node.

    • No Majority: Disk Only (not recommended)

      Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.

    The quorum type you are using is intended for special cluster configurations spanning multiple sites (geographically dispersed). You should be using Node and Disk Majority or Disk Only, although Disk Only is no longer recommended.
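
    The failure tolerances in the list above all follow from the same majority-vote arithmetic. As a minimal sketch (the function name and parameters are illustrative, not part of any clustering API), assuming each node and the witness carry one vote:

    ```python
    def max_node_failures(nodes, witness=False, witness_online=True):
        """Max simultaneous node failures a majority-based quorum can sustain.

        nodes          -- number of voting cluster nodes
        witness        -- True if a disk or file share witness holds a vote
        witness_online -- whether that witness vote is still available
        """
        total_votes = nodes + (1 if witness else 0)
        majority = total_votes // 2 + 1  # strict majority keeps the cluster up
        # Votes the surviving nodes must supply themselves:
        node_votes_needed = majority - (1 if witness and witness_online else 0)
        return nodes - node_votes_needed

    print(max_node_failures(7))                                        # Node Majority: 3
    print(max_node_failures(6, witness=True))                          # witness online: 3
    print(max_node_failures(6, witness=True, witness_online=False))    # witness lost: 2
    print(max_node_failures(2, witness=True))                          # two nodes + file share: 1
    ```

    For the two-node-plus-file-share setup in the original question this gives a tolerance of one node failure, which is why all three votes matter.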

    Attopeu (12/11/2012)


    Do I need to give a vote to both nodes?

    Please advise.

    Thanks much

    Attopeu,

    Yes. The option of removing votes from nodes is usually employed to avoid a DR site becoming vote-heavy in a cluster.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉
