    Mark Jones-393934 (5/27/2011)


    So Active / Active / Active / Passive or AP+N, I really don’t care; the purpose of this article was to convey my experiences on this topic with you all. As Paul said, both are in use every day. The terminology I use is deliberately vanilla so that everyone in a community of DBAs of varying skill can understand it. I am not writing an advanced book on clustering here, nor am I writing TechNet articles (which, funnily enough, use both terminologies for cluster configurations). :w00t:

    Safe to say, you all understood what I meant by an Active / Active / Active / Passive cluster configuration. So did I get it wrong? Not from my point of view - job done. 😀

    Whether people got it is not the point. You are wrong. Period. Your article (and others - I'm not just singling you out) is why there is such confusion around cluster terminology. Your attitude is a bit flippant.

    Mark Jones-393934 (5/27/2011)


    Secondly, virtualisation crops up a few times as a "better solution". I do not disagree with Perry or others who mentioned this is a good way to skin this proverbial cat, but there are times when virtualisation is just NOT suitable. Working as a consultant, I have hit quite a few of them:

    I'm a consultant as well. Any deployment is an "it depends". There are rarely any absolutes.

    Mark Jones-393934 (5/27/2011)


    1. As mentioned by other posters, the scale requirements are too high to allow the use of hypervisors.

    2. Clients' aversion to hypervisors - I see this more and more recently, especially around databases. As a consultant, I can sing and dance the virtues of virtualisation, put the business case forward, and demonstrate the technical implementation. But at the end of the day, if a client does not trust it and does not want it, then I, as their advisor, look for another solution. There are a lot of POOR installations of hypervisors out there. Is the software at fault? No. Implemented properly, as mentioned in the posts, this can be highly effective. But a lot of companies threw hypervisors at their datacenters with no thought or planning, and like every framework, hypervisors require both. Once bitten, twice shy, so the saying goes.

    You could say the same for clusters, especially in architecting for the failover condition. I think I know just a wee bit about 'em (lol), but I see far more cluster hate stemming from bad experiences back in the SQL Server 7.0 and 2000 days than I do aversion to virtualization. Virtualization is not initiated by DBAs; it's pushed on them by IT. On the flip side, it's up to the DBAs to know how to size and ask for what they need.

    There are bad hypervisor implementations just as there is bad physical hardware. Why do you think so many consolidated, and did it with hypervisors? BECAUSE, MORE OFTEN THAN NOT, THEY BOUGHT BIG HARDWARE AND UNDERUTILIZED IT. Virtualization isn't the problem. People not doing their homework is.

    Mark Jones-393934 (5/27/2011)


    3. Costs. Virtualisation is not without its costs. Every VM you carve out in a hypervisor environment has a cost, no matter how small. Best practice for clustering on VMware states that each node should be on an individual ESX server, so Enterprise SQL Server licences would cost about the same as for physical servers.

    What about hidden costs? For an interesting comparison, stack up instancing SQL Server against running SQL Server on hypervisors. Each virtualised SQL Server requires its own C: drive, free space on that drive, a page file, and free RAM for the OS... I’m not saying this is unacceptable in light of the benefits to be gained; I am just saying there is a different point of view to be had, and it should not be ignored.
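    To put rough numbers on that, here is a back-of-the-envelope sketch in Python. All figures are purely illustrative assumptions, not measurements from any real deployment:

        # Compare stacking N SQL Server instances on one OS against
        # running N single-instance VMs, each carrying its own OS overhead.
        # Every number below is an assumption chosen for illustration.
        N = 10            # SQL Server instances to host (assumed)
        sql_ram_gb = 16   # RAM budget per instance (assumed)
        os_ram_gb = 4     # RAM reserved for each guest OS (assumed)
        os_disk_gb = 60   # C: drive, page file, and free space per VM (assumed)

        instanced_ram = N * sql_ram_gb + os_ram_gb   # all instances share one OS
        virtual_ram = N * (sql_ram_gb + os_ram_gb)   # one OS per VM
        virtual_disk = N * os_disk_gb                # extra system disk per VM

        print(f"Instanced:   {instanced_ram} GB RAM")
        print(f"Virtualised: {virtual_ram} GB RAM + {virtual_disk} GB system disk")

    On these made-up numbers, the per-VM OS overhead works out to an extra 36 GB of RAM and 600 GB of system disk across ten guests; whether that is acceptable depends on the benefits weighed elsewhere in this thread.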

    What about training costs? Monitoring costs? Support costs? This technology can very quickly underpin your infrastructure; you need to get it right...

    I suggest you look again. In some scenarios, the costs associated with virtualization (especially if you are using it for dev or test) are WAY lower than running physical hardware. There are some places in prod as well. Everything you mention (monitoring, etc.) is the SAME for a physical environment. A VM is essentially the same as a physical server from, say, a monitoring perspective, so it's no different. The only real difference is monitoring the hypervisor as well, and then possibly having to reconcile that with what's going on in the VM. This isn't rocket science.

    Your attitude strikes me as the way many thought three or four years ago with regard to virtualizing SQL Server. The space has matured quite a bit. Can each VM scale to the same degree as physical hardware? No. But some of the barriers are gone (especially around memory) with most hypervisors. Processor and, potentially, I/O are the bigger problems. You're not going to get large scale if you're processor-bound with current versions of hypervisors. But those wouldn't be the scenarios you'd look to virtualize, as you've said and I have above. It's all ... gasp ... about knowing your requirements. EVERY solution has tradeoffs to some degree, because no one has unlimited time, resources (human or technological), or budget.

    Mark Jones-393934 (5/27/2011)


    Which leads me to:

    4. Supportability. One valuable lesson I have learnt working as a consultant: you don’t aim for a successful project, you aim for a successful implementation for the lifetime of the client’s needs. What do I mean by this? While experts are engaged, projects usually succeed; it’s when the experts roll off the project that things start to go wrong. In that vein, everything my organisation and I architect and implement must meet a single golden rule: it must be supportable by the client.

    In the case of this article, my client did not have any VMware or other hypervisor experience. They had no budget to train, nor any desire to. The budget was already spent by the time the need for UAT surfaced; we had to use what was there - no money for extras like VMware.

    Quite frankly, here's where I'll ding you: I would never suggest or recommend a client put any environment other than production on the same cluster. You're asking for trouble in many respects. Might I suggest a quick look at another whitepaper I wrote recently (see here) about having proper testing for releases, which includes having the right environments. One of the biggest benefits of virtualization is that dev and QA can trash their environments without affecting production. You can't do that if they coexist on the same cluster, or on the same hardware if standalone.

    As has been pointed out, cost is also relative, because Hyper-V is built into Windows and you can get free versions of VMware and other hypervisors.

    When I work with clients, I never roll in with a "this is the right solution" attitude (not saying you do either). But there are times where it is worth trying to argue for what will be right long term. As a Cluster MVP, people assume that's the only architecture I'll present. They're wrong. Sometimes (but not always) the budgets can be amended. Education is an issue that can be solved. The bean counters are important, but so is the right solution (whatever that may be). Short term gain can lead to long term pain. It may be manageable now, but what about 24 months from now? That's where the rubber meets the road.

    Supportability to me as a consultant is more than just "Can they manage this when I walk out the door?" - is the architecture 100% supported by the vendors if they need to call? That, to me, is as big an issue, if not bigger, and one that many overlook. When people start cutting costs and other things, corners can get sheared off as well.