multiple instance sql cluster with different windows configuration
Posted Friday, April 12, 2013 10:02 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 8:13 PM
Points: 124, Visits: 750
Perry,

Do we need to have multiple witness disks if we want to go with this type of setting?
Post #1441780
Posted Friday, April 12, 2013 10:07 AM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 12:28 PM
Points: 6,518, Visits: 14,038
deep_kkumar (4/12/2013)
Perry,

Do we need to have multiple witness disks if we want to go with this type of setting?

A cluster has at most one witness, if one is required at all; it can be either a disk or a file share.
A cluster with an odd number of nodes traditionally does not use a witness.
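For illustration only (not from this thread, and the helper name is made up): a small Python sketch of the majority arithmetic behind why an even node count wants a witness vote and an odd count does not.

```python
# Illustrative only: the majority arithmetic behind cluster quorum.
# Helper name and signature are invented for this sketch.
def has_quorum(nodes_up: int, total_nodes: int, witness: bool) -> bool:
    """True if the surviving nodes (plus an optional single disk or
    file-share witness vote, assumed reachable) hold a strict majority."""
    total_votes = total_nodes + (1 if witness else 0)
    votes_held = nodes_up + (1 if witness else 0)
    return votes_held > total_votes // 2

print(has_quorum(1, 2, witness=True))   # 2 nodes + witness, 1 node lost -> True
print(has_quorum(2, 3, witness=False))  # odd node count, no witness     -> True
print(has_quorum(1, 3, witness=False))  # 2 of 3 nodes lost              -> False
```

With two nodes and a witness there are three votes, so losing one node still leaves a majority; with three nodes the odd count already provides a tie-breaker.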



-----------------------------------------------------------------------------------------------------------

"Ya can't make an omelette without breaking just a few eggs"
Post #1441784
Posted Friday, April 12, 2013 10:32 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 8:13 PM
Points: 124, Visits: 750
Thanks Perry. Too many questions, I know. I have set up a two-node cluster and am now adding a third node to it. I have asked for three drives for the second instance: system db files, user db data files, and user db log files. So the drive volumes existing on the other two nodes, plus these three new volumes, need to be shared across all three nodes, right? That way we can fail over to any node we want if needed. Correct me if I am wrong.

And also, we have to select the same MSDTC and quorum drive used in the other two-node cluster while installing the failover cluster on the third node, right?
Post #1441793
Posted Friday, April 12, 2013 11:11 AM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 12:28 PM
Points: 6,518, Visits: 14,038
deep_kkumar (4/12/2013)
Thanks Perry. Too many questions.

You're welcome, I don't mind, really.

deep_kkumar (4/12/2013)
I have setup a two node cluster and now adding a third node to it. I have asked for 3 drives for system db files, used db data files and user db log files for second instance. So we need to share the other drive volumes existing on other two nodes and these three new volumes to be shared between on all three nodes. Right? So that we can failover to any node we want to if needed. Correct me if I am wrong.

It all depends on how you want your cluster to operate and respond to failures.

Say you have a three-node cluster: NodeA, NodeB, NodeC.

Instance 1 is installed on NodeA and NodeC
Instance 2 is installed on NodeB and NodeC

You would typically mask or zone the storage for instance1 from NodeB and likewise instance2 from NodeA.

This design may not suit you at all; there are many ways of achieving a highly available SQL Server system. How do you want it to respond to failures?
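The masking rule above falls out of the install map. As a hypothetical sketch (instance and node names taken from the example, structure invented for illustration): any node an instance is not installed on should not see that instance's LUNs.

```python
# Hypothetical model of the two-instance layout above; structure invented.
instances = {
    "Instance1": {"NodeA", "NodeC"},
    "Instance2": {"NodeB", "NodeC"},
}
all_nodes = {"NodeA", "NodeB", "NodeC"}

# A node that an instance is NOT installed on should not see that
# instance's LUNs, so mask/zone its storage away from those nodes.
for name, installed_on in instances.items():
    masked_from = all_nodes - installed_on
    print(name, "storage masked from:", sorted(masked_from))
```

For this layout, Instance1's storage is masked from NodeB and Instance2's from NodeA, matching the zoning described above.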


deep_kkumar (4/12/2013)
And also, we have to select the same MSDTC and quorum drive used in the other two-node cluster while installing the failover cluster on the third node, right?

If you're using a disk witness, that storage device obviously has to be available to every node in the cluster. In my opinion, the following services are the common resources you would fail over to any node in a multi-node cluster:

MSDTC
File
Print


Post #1441800
Posted Friday, April 12, 2013 11:33 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 8:13 PM
Points: 124, Visits: 750
Thanks a lot Perry. I really appreciate your help.

If possible, I want to make all the disks available on all three nodes and introduce an affinity rule so that an instance running on NodeB and NodeC never fails over to NodeA. I also like your idea of masking the disks. Which one do you suggest?

Actually, my requirement is to set up a four-node cluster with Instance1 running on NodeA and NodeB, Instance2 running on NodeB and NodeC, and Instance3 running on NodeD and NodeB.
Post #1441818
Posted Friday, April 12, 2013 11:53 AM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 12:28 PM
Points: 6,518, Visits: 14,038
deep_kkumar (4/12/2013)
so that an instance running on NodeB and NodeC never fails over to NodeA.

Simple: don't install the instance on NodeA.

If you do install the instance on NodeA, just ensure you remove NodeA as a possible owner of the cluster group's virtual network name resource for the instance in question.
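A hypothetical sketch of that gating logic (the resource name is invented for illustration): the cluster will only move a group to a node that appears in the possible-owner list of the instance's virtual network name resource, so removing NodeA from that list prevents failover to it.

```python
# Hypothetical sketch (resource name invented): failover is gated by the
# possible-owner list on the instance's virtual network name resource.
possible_owners = {
    "SQL Network Name (Instance2)": ["NodeB", "NodeC"],  # NodeA removed
}

def can_fail_over(resource: str, node: str) -> bool:
    """A node is a valid failover target only if it is a possible owner."""
    return node in possible_owners.get(resource, [])

print(can_fail_over("SQL Network Name (Instance2)", "NodeA"))  # False
print(can_fail_over("SQL Network Name (Instance2)", "NodeC"))  # True
```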


deep_kkumar (4/12/2013)
I also like your idea of masking the disks. Which one do you suggest?

Not sure what you mean here.


deep_kkumar (4/12/2013)
Actually, my requirement is to set up a four-node cluster with Instance1 running on NodeA and NodeB, Instance2 running on NodeB and NodeC, and Instance3 running on NodeD and NodeB.

You'll need either a disk resource, if all the nodes are on the same site, or a file share if they're geographically dispersed.

Hmm, the following solution would also be viable:

Instance 1 on NodeA and NodeD
Instance 2 on NodeB and NodeD
Instance 3 on NodeC and NodeD

Make NodeD a higher-spec, higher-capacity box than the other nodes so that it can handle all resources failing over to it at once if they ever need to.
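Assuming illustrative memory figures (these numbers are not from the thread), the sizing rule for that N+1 layout is just a worst-case sum: NodeD must be able to host every instance at once.

```python
# Illustrative memory figures only (not from the thread): check that
# NodeD could host all three instances at once in the worst case.
instance_memory_gb = {"Instance1": 32, "Instance2": 16, "Instance3": 16}
node_d_memory_gb = 96  # assumed higher spec than the other nodes

worst_case = sum(instance_memory_gb.values())
print(worst_case, "GB needed on NodeD;",
      "fits" if worst_case <= node_d_memory_gb else "does not fit")
# prints: 64 GB needed on NodeD; fits
```

The same check applies to CPU and storage throughput; whichever resource is tightest determines how much headroom NodeD really needs.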


Post #1441827
Posted Friday, April 12, 2013 12:36 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 8:13 PM
Points: 124, Visits: 750
In my case, NodeA and NodeB are very high-capacity servers. On these nodes we are running our most critical instance (Instance1); currently Instance1 is running on NodeA. The other two instances are for reporting purposes, so we are planning to run those two (Instance2 and Instance3) on NodeB.

If NodeA fails for some reason, Instance1 should fail over to NodeB, in which case all three instances will be running on NodeB. If needed, we can manually fail over the two reporting instances running on NodeB to NodeC and NodeD respectively if we have any resource allocation issues.
Post #1441845
Posted Monday, April 29, 2013 9:16 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 8:13 PM
Points: 124, Visits: 750
I was able to add the new node to the existing cluster, but I am lost on how to install the SQL instance on this node. Do I need to specify the same cluster network name? I have a new cluster name which I would like to use for this new instance. How do I proceed?
Post #1447599