WSFC Node Majority with witness

  • Hello guys,

    Hope you all are doing fine.

I've got a question about how you configure WSFC for your AG/FCI instances.

How do you manage quorum when all you have is a 2-node WSFC?

You would usually add a disk witness, a file share witness, or another node if you could afford it.

But how would you manage quorum when you have tens and tens of clusters?

Would you have one server hosting lots of shares on different disks to provide the witness votes, or maybe a few servers doing the same task?

    I'd like to hear about your experience regarding the planning and configuration of very large SQL Environments.

Thanks for taking the time to read this post.

  • There is another option available - you can also use a cloud witness.

That said, for a 2-node cluster I would use the same approach regardless of the type of cluster: add a disk witness to every cluster using shared storage.  The cluster services and the disk witness can be hosted on either server and will fail over automatically as needed.

There is no issue with using shared storage for the quorum disk while keeping non-shared storage for the AGs, as long as both nodes in the cluster can access that quorum storage.

If the secondary for the AG is not in the same DC - or cannot access the same storage - then a file share or cloud witness would be required.
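For reference, all three witness types mentioned above can be configured with the `Set-ClusterQuorum` cmdlet from the FailoverClusters PowerShell module (cloud witness requires Windows Server 2016 or later). The disk name, share path, and storage account below are placeholders, not values from this thread:

```powershell
# Disk witness: a small shared LUN presented to every node in the cluster
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

# File share witness: an SMB share the cluster name object can write to
Set-ClusterQuorum -FileShareWitness "\\witness-server\wsfc-quorum"

# Cloud witness: an Azure storage account, no extra on-prem server needed
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"
```

For "tens and tens of clusters," the cloud witness option scales well because one storage account can serve as the witness for many clusters without provisioning file shares for each.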

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

Thanks for your reply, Jeffrey. So you would basically create shared storage accessible from either server of the cluster and use it as the witness? And if it can fail over to the other node, that's a plus.

    Thanks for the feedback, really appreciated! 🙂

Not exactly - I would create shared storage, present that storage to every node in the cluster, and allow it to fail over to any available node.  If you assign it to a single node and that node goes down, you lose 2 votes; if the quorum drive can fail over, then you only lose the 1 vote for that single node.

In a 2-node cluster with a witness there are 3 votes, and you need 2 of them for a healthy cluster - so losing 2 votes means the cluster goes down.
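The vote math above can be sketched in a few lines of Python (an illustration of static quorum majority only - it ignores dynamic quorum, which later Windows Server versions layer on top):

```python
def has_quorum(total_votes: int, votes_lost: int) -> bool:
    """A cluster keeps quorum while a strict majority of votes survives."""
    majority = total_votes // 2 + 1
    return total_votes - votes_lost >= majority

# 2 nodes + witness = 3 votes: losing one node (1 vote) is survivable
print(has_quorum(3, 1))  # True

# Witness pinned to the failed node: 2 of 3 votes lost, cluster goes down
print(has_quorum(3, 2))  # False

# 2 nodes with no witness at all: losing either node costs quorum
print(has_quorum(2, 1))  # False
```

This is why the witness needs to be able to fail over independently of either node: it keeps the surviving node at 2 of 3 votes instead of 1 of 3.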

Jeffrey Williams
