Problem: Cluster not failing over after adding physical disk
Nodes: Active/Active two-node cluster, each node with a SQL Server instance installed
Node1 has group A
Node2 has group B
Recent Event: I added a Physical Disk resource to group B on node 2 of the cluster
Story: I recently added a physical disk resource to group B on node 2 of a two-node cluster. We are patching the servers, so we moved group A from node 1 to node 2, restarted node 1, and then moved group A back from node 2 to node 1.
The issue occurs when moving group B from node 2 to node 1. The resources (SQL, IP, …) start coming online on node 1, then the disk I just added fails, and the whole of group B fails back to node 2.
I checked all the settings of the new disk resource, and they match the other disk resources exactly.
What I noticed is that in Computer Management > Disk Management on node 1, I do not see the disks for node 2.
More specifically: in Disk Management on node 2, I can see all of that node's local disks plus the disks for node 1; the node 1 disks are marked unreachable with a red X. On node 1, however, I cannot see the disks for node 2 at all, not even marked unreachable. I see only the local disks on that node.
Another thing I noticed is that Disk Management on both nodes shows a Disk 2.
I rescanned the disks on both servers, and still no luck.
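For reference, here is how I have been checking things from an elevated prompt on each node. This is just a sketch using the built-in tools; the resource name "Disk R:" is a placeholder for whatever the new disk resource is actually called in your cluster.

```shell
rem Rescan the storage bus for new/changed LUNs (run on the node that cannot see the disk)
echo rescan | diskpart

rem List all Physical Disk resources and their current state/owner
powershell -Command "Get-ClusterResource | Where-Object { $_.ResourceType -eq 'Physical Disk' }"

rem Dump the private properties (disk signature / GUID) of the failing resource,
rem to compare against the disk the other node actually presents.
rem "Disk R:" is a placeholder resource name.
cluster res "Disk R:" /priv
```

The `/priv` output is worth comparing between the new disk and a working one, since the cluster matches disks by signature rather than by drive letter.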
Any help would be appreciated.