Why every SQL Server installation should be a cluster

  • akljfhnlaflkj

    SSC Guru

    Points: 76202

    Good article, thanks.

  • Joaquin-435851

    Grasshopper

    Points: 11

What is the appropriate way to deploy a WSFC: with a file share witness, a disk witness ("SAN or iSCSI"), or a third server? (Assuming you have two servers and need a witness.)

    I know there are other ways / combinations but they all involve the above don't they?

As far as "centralized storage based on company policy" goes - I would say that any policy a company employs that restricts a solution or makes it more complicated should be revisited and adjusted. But I do understand it and have seen places that do it; it just requires more hands-on work and complicates a WSFC.
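For reference, the witness options listed above map directly onto the quorum modes exposed by the FailoverClusters PowerShell module on Windows Server 2012 R2 and later. A minimal sketch (the disk, share, and server names are hypothetical):

```powershell
# Disk witness: a small shared LUN (SAN or iSCSI) visible to both nodes
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

# File share witness: an SMB share hosted on a third server outside the cluster
Set-ClusterQuorum -FileShareWitness "\\witness01\SQLClusterWitness"

# Node majority only: no witness, appropriate for odd node counts
Set-ClusterQuorum -NodeMajority
```

Whichever mode is chosen, the goal is the same: an odd number of votes so the cluster can survive the loss of one voter without losing quorum.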

  • rstone

    SSCertifiable

    Points: 6011

We had a SQL Server failover cluster for many years because we "wanted" it to be available at all times. We have many servers, and the cluster turned out to be the most unreliable: there was one minor problem after another where the issue required intervention to bring it back online. We moved off the cluster. We do need downtime for maintenance, but the maintenance is scheduled and manageable. We used to joke about how many eights we were getting on the cluster and how many years it would require to get back to a nine. Anyway, if you're a government agency that can't afford to pay a DBA $10 an hour to be on call, then clustering - or HA - is probably not for you. A half-effort HA (HE-HA?) can be worse than no HA.

    Randy
Helpdesk: Perhaps I'm not the only one that does not know what you are doing. 😉

  • thecosmictrickster@gmail.com

    SSChampion

    Points: 10386

    Markus (4/15/2014)


    William Soranno (4/15/2014)


    Bill,

Your scenario might work if I had only a few databases to mirror and a few "applications" to change connection strings.

    I have 66 databases, and growing, on the cluster. I need to keep the instance name the same. There are probably just over 125 applications that connect to the cluster, not counting Sharepoint. Then there are the hundred plus data sources in Reporting Services.

    I would be lynched by the developers and the sys admins if we had to change the connection strings.

We simply create DNS aliases for all of the servers. That way, when you replace the server or cluster, you simply change the alias to point to the new hardware. No one has to change anything then.

We do this. CNAMEs are very useful. The only thing you may have to watch out for is if Kerberos authentication is in use. I have come across a couple of cases where using a CNAME didn't work because the application was using Kerberos; an[other] A record had to be created instead.
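On the Kerberos point: whether the alias works depends on the SPN the client ends up requesting. A hedged sketch of the two pieces involved (zone, host, and account names are made up; the `DnsServer` module must be available):

```powershell
# Create the alias pointing at the current cluster network name
Add-DnsServerResourceRecordCName -ZoneName "corp.example.com" `
    -Name "sqlprod" -HostNameAlias "sqlclu01.corp.example.com"

# Some Kerberos clients canonicalize a CNAME to its target before building
# the SPN; when that breaks, registering an SPN for the alias on the SQL
# Server service account (or using an A record instead, as noted above)
# is the usual fix:
setspn -S MSSQLSvc/sqlprod.corp.example.com:1433 CORP\sqlsvc
```

`setspn -S` checks for duplicate SPNs before adding, which is safer than the older `-A` switch.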



    Scott Duncan

    MARCUS. Why dost thou laugh? It fits not with this hour.
    TITUS. Why, I have not another tear to shed;
    --Titus Andronicus, William Shakespeare


  • thecosmictrickster@gmail.com

    SSChampion

    Points: 10386

    robert_verell (4/15/2014)


Licensing with SQL 2012 is a pain point here too for physical clusters. Since you've installed SQL Server on the second (and subsequent) nodes, you have to license those boxes, which once again will sit idle and do nothing. For VM clusters this isn't as big a deal, since all of the cores will be licensed on the host anyway.

Primary server licenses include support for one secondary server only, and any additional secondary servers must be licensed for SQL Server. So if you're only running a two-node active/passive cluster, you only need to license the active server (or the server with the larger number of cores, if they are not identical - which they should be).



    Scott Duncan

    MARCUS. Why dost thou laugh? It fits not with this hour.
    TITUS. Why, I have not another tear to shed;
    --Titus Andronicus, William Shakespeare


  • Recombinant

    SSC Enthusiast

    Points: 165

If hardware is so reliable these days, then a good strategy may be to build clusters that pair one new (supported, warrantied) server with an old (unsupported, out-of-warranty) server. Keep the cluster running on the old box until its hardware fails, cut over to the new server, then replace the old out-of-warranty server with another new box. That way, the capacity demands of the business dictate the replacement timetable rather than a three-year warranty cycle. You might find yourself replacing hardware when it breaks, every 5-8 years, rather than when the warranty runs out. After all, why would you fix something that wasn't broken?

    (Edit: I have noticed that this article was originally posted a while ago!)
