SQL 2012 on Multiple Node FCI

  • I have recently moved into a new group of DBAs, and I am the odd man out on some architectural decisions and am struggling to defend my side of the argument. The company is upgrading from SQL 2008 R2 to SQL 2012. In my previous experience with this I have set up Availability Groups on Windows Server 2012 (R2) with an active/synchronous replica in the Primary Data Center and an asynchronous replica in the Secondary Data Center, with excellent results in terms of performance and high availability.

    The load of the databases requires 3 active servers. I understand the negative of the AlwaysOn solution is the storage footprint, not failing over between servers, but that is not a worry.

    My plan was to have 9 servers (a rough sketch of one group follows the list) -

    3 Primary

    3 SYNC REPLICA

    3 ASYNC REPLICA in Secondary Data Center
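
    For illustration, a rough T-SQL sketch of how one of the three groups could be defined on SQL 2012 is below; the group, database, server, and endpoint names (AG1, DB1, SQLNODE1-3) are placeholders, not real names:

        -- Minimal sketch: one availability group with a sync replica in the Primary
        -- Data Center and an async replica in the Secondary Data Center.
        -- All names and endpoint URLs are placeholders.
        CREATE AVAILABILITY GROUP [AG1]
        FOR DATABASE [DB1]
        REPLICA ON
            N'SQLNODE1' WITH (   -- primary, Primary Data Center
                ENDPOINT_URL = N'TCP://SQLNODE1.corp.local:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE = AUTOMATIC),
            N'SQLNODE2' WITH (   -- sync replica, Primary Data Center
                ENDPOINT_URL = N'TCP://SQLNODE2.corp.local:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE = AUTOMATIC),
            N'SQLNODE3' WITH (   -- async replica, Secondary Data Center
                ENDPOINT_URL = N'TCP://SQLNODE3.corp.local:5022',
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL);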

    The alternative plan requires the same number of servers -

    6-node FCI in the Primary Data Center, 3-node FCI in the Secondary Data Center.

    My two main questions are -

    1) What are the negative aspects of having SQL 2012 installed on Windows 2008 R2?

    2) What are the positive aspects, if any, besides the storage footprint, of not using Availability Groups and sticking with an "old school" method of FCI and mirroring?

  • Brent Ozar has a good blog post about running SQL 2012 on Windows Server 2008 and why Windows 2012 should be used:

    http://www.brentozar.com/archive/2014/03/need-high-availability-sql-server-windows-server-2012/

  • gwellbrock (7/10/2014)


    The load of the databases requires 3 active servers.

    Do you mean you distribute the databases across 3 separate SQL Server instances?

    gwellbrock (7/10/2014)


    I understand the negative of the AlwaysOn solution is the storage footprint, not failing over between servers, but that is not a worry.

    The fact that AlwaysOn groups with no FCIs require no shared/replicated storage is a positive, not a negative: you are removing the storage single point of failure.

    gwellbrock (7/10/2014)


    My plan was to have 9 servers -

    3 Primary

    3 SYNC REPLICA

    3 ASYNC REPLICA in Secondary Data Center

    The alternative plan requires the same number of servers -

    6-node FCI in the Primary Data Center, 3-node FCI in the Secondary Data Center.

    Trying to visualise here but not quite getting it. Can you provide a graphical representation of your design?

    gwellbrock (7/10/2014)


    1) What are the negative aspects of having SQL 2012 installed on Windows 2008 R2?

    For clustered environments the choice should be Windows 2012 R2; there are a whole host of changes to the clustering and networking subsystems from Windows 2008 R2 to Windows 2012 to Windows 2012 R2.

    gwellbrock (7/10/2014)


    2) What are the positive aspects, if any, besides the storage footprint, of not using Availability Groups and sticking with an "old school" method of FCI and mirroring?

    Again, using an AlwaysOn group with no FCI requires no shared storage; this is a positive.

    If you're using AlwaysOn groups and any of your replicas are FCIs, then you reintroduce the storage single point of failure; this is a negative.
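
    If it helps when weighing up the two designs, this is the sort of standard-DMV query (nothing environment-specific in it) you can run on the primary to see each replica's commit mode, failover mode, and synchronisation health in an AlwaysOn group:

        -- Show replica configuration and current synchronisation health for an AG.
        SELECT ar.replica_server_name,
               ar.availability_mode_desc,       -- SYNCHRONOUS_COMMIT / ASYNCHRONOUS_COMMIT
               ar.failover_mode_desc,           -- AUTOMATIC / MANUAL
               ars.role_desc,                   -- PRIMARY / SECONDARY
               ars.synchronization_health_desc
        FROM sys.availability_replicas AS ar
        JOIN sys.dm_hadr_availability_replica_states AS ars
            ON ar.replica_id = ars.replica_id;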

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Sorry if that was confusing; I meant that at any one time 3 nodes need to be active to support the database environment. In short, I am trying to convince my peers to use the AlwaysOn solution below, but I am having trouble explaining why a 6-node cluster is a bad idea. They are happy with their current 6-node cluster setup, as they have had no downtime with it, and they are reluctant to move towards the new technology despite the obvious benefits of a readable secondary replica for reporting, etc.

    ALWAYS ON (diagram)

    *Mirroring used with the FCI to get data to the Secondary Data Center

    FCI (diagram)

  • One con of using AGs for HA: HADR_SYNC_COMMIT waits. We cannot rebuild indexes on AG databases in sync mode; we have to switch to async, else the blocking is intolerable. FCI achieves HA much more cheaply in terms of performance (and storage, in our case).
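
    For anyone reading along, the switch around index maintenance looks roughly like this (run on the primary; [MyAG] and SQLNODE2 are placeholder names, not our real ones):

        -- Automatic failover requires synchronous commit, so move the replica to
        -- manual failover first if it is currently set to automatic.
        ALTER AVAILABILITY GROUP [MyAG]
            MODIFY REPLICA ON N'SQLNODE2'
            WITH (FAILOVER_MODE = MANUAL);

        -- Drop the replica to async-commit for the duration of the rebuilds.
        ALTER AVAILABILITY GROUP [MyAG]
            MODIFY REPLICA ON N'SQLNODE2'
            WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

        -- ...run the index maintenance, then put the replica back...
        ALTER AVAILABILITY GROUP [MyAG]
            MODIFY REPLICA ON N'SQLNODE2'
            WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

        ALTER AVAILABILITY GROUP [MyAG]
            MODIFY REPLICA ON N'SQLNODE2'
            WITH (FAILOVER_MODE = AUTOMATIC);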

    After over 2 years of running this configuration in production using AGs for HA, we're in the process of moving back to FCIs for HA. This is not a trivial exercise; it would have been much simpler if we had just started that way.

    Having said all this, we are still fans of AGs for readable secondaries and DR. I would seriously consider using AGs instead of plain mirroring with your FCI solution, if you wind up going that route.

  • Mike: We've hit the same problem with online index rebuilds. When we rebuild indexes in async mode, our application transactions go through (as expected, with no HADR_SYNC_COMMIT waits), but we cannot rebuild our indexes in sync mode due to blocking caused by HADR_SYNC_COMMIT waits. We've got identical servers and dedicated storage, and this problem occurs even if we swap which nodes are Primary and Replica, or test in a different data center. We have an enterprise-grade SAN that has stood the test of large I/O for data warehouses, etc., but we're still looking into different storage to confirm or eliminate storage as the bottleneck. Was storage an issue for you when you ran into this problem?
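
    (For reference, a quick way to watch these waits is the standard wait-stats DMV; there is nothing environment-specific in a check like this:)

        -- Cumulative HADR_SYNC_COMMIT waits since the last restart or stats clear.
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms,
               wait_time_ms - signal_wait_time_ms AS resource_wait_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = N'HADR_SYNC_COMMIT';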

  • We also have a good enterprise SAN (VMAX) with many fast spindles, RAID 10, flash caching, and Fibre Channel connectivity.
