SQLServerCentral is supported by Red Gate Software Ltd.

Getting the most out of your SQL Server 2008 Cluster
Posted Thursday, May 26, 2011 10:45 AM


SSCertifiable

Group: General Forum Members
Last Login: Yesterday @ 11:53 PM
Points: 6,192, Visits: 13,341
It doesn't, however, mention Active\Active\Active\Passive.

Next time I build myself an 8-node cluster with 3 passive nodes, I suppose it'll be an Active\Active\Active\Active\Active\Passive\Passive\Passive.


-----------------------------------------------------------------------------------------------------------

"Ya can't make an omelette without breaking just a few eggs"
Post #1115679
Posted Thursday, May 26, 2011 10:52 AM
SSCommitted

Group: General Forum Members
Last Login: Wednesday, May 28, 2014 1:35 PM
Points: 1,635, Visits: 1,970
Thanks for the article. It's one I'll have to keep in my bag of tricks because it'll likely come in handy some day.
Post #1115685
Posted Thursday, May 26, 2011 11:24 AM
Grasshopper

Group: General Forum Members
Last Login: Friday, April 18, 2014 12:22 PM
Points: 10, Visits: 34
Actually, A/A and A/P are wrong terminology for SQL Server. I helped start that way back in the SQL Server 2000 days when I was still an employee of MS and wrote the SQL 2K failover clustering whitepaper.

The proper term is some variation of single instance failover cluster (or failover clustering instance) or multiple instance failover cluster. I've seen a variation with active in the name, I've seen where SQL Server is explicitly called out.

For SQL Server, A/P and A/A were holdovers from SQL Server 7.0 clustering where you literally could only have a maximum of two installs (and I'd rather try to block that ugliness out of my head). It makes no sense to say active/active/passive or a/a/a/a/a/p since that is probably not what it looks like. For example, you could have a three node cluster but have four instances running on two nodes. That isn't a/a/a/a/p.

When they introduced instancing in SQL Server 2000, we had talked about a name that made sense. My whitepaper was the first place to use the newer stuff and it's evolved a little over the years. Unfortunately, people are still clinging to A/P and A/A. Much like Windows clustering will always be referred to by some as MSCS. It is what it is.

Now you know ... the rest of the story.

So yes, I'm still a bit miffed and annoyed that people are using A/P and A/A. At this point it's like people spelling my name wrong (e.g. Alan or Allen instead of Allan). I try to ignore it, but know that it is the wrong terminology.

Also, N+1 comes from W2K Datacenter, where you could do up to four nodes. Boxes were smaller, so having multiple instances was more of a challenge, and a single dedicated failover node was a potential scenario. It evolved into N+i, where i is the number of dedicated failover nodes in the Windows failover cluster, since it's easier now to have more than one dedicated failover node if you want. I believe the SQL 2K failover clustering whitepaper was one of the first places to use that as well. That is the proper use of N+i. I've never seen anything like what Paul said with N-i or whatever.
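To make the N+i naming concrete, here is a toy sketch (the helper function is hypothetical, purely illustrative, not from any whitepaper) deriving the label from node counts:

```python
# Toy illustration of N+i naming for a Windows failover cluster:
# N nodes host instances, i nodes are dedicated failover targets.

def n_plus_i(total_nodes: int, dedicated_failover_nodes: int) -> str:
    """Return the N+i label for a cluster with the given node counts."""
    n = total_nodes - dedicated_failover_nodes
    if n < 1 or dedicated_failover_nodes < 0:
        raise ValueError("need at least one active node and i >= 0")
    return f"{n}+{dedicated_failover_nodes}"

# The 8-node cluster with 3 passive nodes mentioned earlier is 5+3:
print(n_plus_i(8, 3))  # -> 5+3
# The classic four-node W2K Datacenter setup with one spare is 3+1:
print(n_plus_i(4, 1))  # -> 3+1
```

Note that, as the post says, the label describes dedicated failover nodes, not how many instances happen to be running where.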

Allan Hirt

PS - I won't shame anyone in a public talk if you use A/P or A/A. :)
PPS - If you want to see how complex a multiple-instance topology can be, I highly recommend taking a look at my whitepaper "Applying Updates to a Clustered Instance of SQL Server 2008 or SQL Server 2008 R2", which is linked from my blog post here.
Post #1115715
Posted Thursday, May 26, 2011 11:41 AM
Ten Centuries

Group: General Forum Members
Last Login: Thursday, July 17, 2014 12:34 PM
Points: 1,414, Visits: 4,539
Lorenzo DBA (5/26/2011)
I would agree that virtualization is the answer, and it is cost-effective from both the hardware and licensing perspectives. If you license by ESX host instead of by VM, it is cheaper. We have a 3-node cluster composed of 3 VM servers: Windows Server 2008 R2 64-bit Enterprise with SQL Server 2008 R2 64-bit Enterprise. It runs perfectly and we use it for UAT testing. Virtualization also offers an added HA layer, as well as SAN snapshots taken of each volume. We are very happy with the clustered virtual machines we have.


If you need it only for testing and don't care about some management features, there is a free version of ESXi server you can download. I was running it until a few days ago, when I switched to Hyper-V.


https://plus.google.com/100125998302068852885/posts?hl=en
http://twitter.com/alent1234
x-box live gamertag: i am null
http://live.xbox.com/en-US/MyXbox/Profile
Post #1115728
Posted Thursday, May 26, 2011 11:48 AM
Ten Centuries

Group: General Forum Members
Last Login: Thursday, July 17, 2014 12:34 PM
Points: 1,414, Visits: 4,539
We're in the process of going to Windows 2008 R2 and SQL 2008 R2 or Denali. We put Windows 2008 R2/SQL 2005 on our QA/UAT servers for the application testing. The real production machines that are clustered were untouched until the cutover. We had some issues, and we tested clustering and SQL on them prior to the move, but then we completely wiped them and reinstalled everything to have a clean slate for production.

Virtualizing SQL is OK, but for the most critical machines you will want things like dual gigabit NICs, and a second box in case the OS or a Windows update screws up. In the last 5 years we used to cry when we had to reinitialize replication on some important tables; in a few cases it would take 11-12 hours. These days we can run a snapshot on the same tables in 5-10 minutes in some cases. If we had these on VMware they would be competing for resources with other instances.

Our newest SQL boxes have 72GB of RAM in each one.


https://plus.google.com/100125998302068852885/posts?hl=en
http://twitter.com/alent1234
x-box live gamertag: i am null
http://live.xbox.com/en-US/MyXbox/Profile
Post #1115734
Posted Friday, May 27, 2011 6:00 AM
SSC Veteran

Group: General Forum Members
Last Login: Friday, May 17, 2013 8:23 AM
Points: 226, Visits: 155
Oscar Zamora (5/26/2011)
Correct me if I am wrong, but I see a combination of Enterprise Edition and Developer Edition binaries in Node 4. It seems that they can coexist with no problems. Is that accurate?

I do have MSDN licenses and might use them instead of Developer Edition Licenses. Wondering if the DLLs can coexist.


Hi Oscar,

As you install SQL 2008 R2 you have the option of choosing a default or a named instance. Choosing a named instance also gives you the option of a new folder in which to store the SQL Server binaries. This allows SQL Server editions to happily coexist on the same Windows OS (be they physical or virtual servers).

I will say I was very impressed with the SQL install for 2008 and 2008 R2 on Windows 2008 R2; it is vastly improved over SQL Server 2005 on Windows 2003. In fact, a more recent cluster build included 4 nodes and 7 instances of SQL Server (dare I say Active/Passive?), all installed without a single reboot.

The only time I did have to reboot these servers was when I moved the page file off the C drive...
Post #1116164
Posted Friday, May 27, 2011 7:20 AM
SSC Veteran

Group: General Forum Members
Last Login: Friday, May 17, 2013 8:23 AM
Points: 226, Visits: 155

Hi all,

Thanks for the feedback and to all who responded. This is my first article, so I am somewhat green to some of the nuances of article writing.

So I thought I would take time to read your posts and see how the article fared. I managed to sprain my ankle quite badly yesterday, so I spent 4 hours in A&E reading your responses.

I must say it is with a wry smile that I note the big point for discussion is the Active/Passive terminology. I feel like I spent weeks building a Ferrari, only for people to notice that the tyre tracks it leaves on the road are not quite right...

So Active/Active/Active/Passive or AP+N, I really don't care; the purpose of this article was to share my experiences with you all on this topic. As Paul said, both are in use every day. The terminology I use is deliberately vanilla, so that everyone in a community of DBAs of varying skill can understand it. I am not writing an advanced book on clustering here, nor am I writing TechNet articles (which, funnily enough, use both terminologies for cluster configurations).

Safe to say, you all understood what I meant by an Active/Active/Active/Passive cluster configuration. So did I get it wrong? Not from my point of view; job done.

Secondly, virtualisation crops up a few times as a "better solution". I do not disagree with Perry or others who mentioned this is a good way to skin this proverbial cat, but there are times when virtualisation is just NOT suitable. Working as a consultant, I have hit quite a few of them:

1. As mentioned by other posters, the requirements are of too high a scale to allow the use of hypervisors.

2. Clients' aversion to hypervisors. I see this more and more recently, especially around databases. As a consultant, I can sing and dance the virtues of virtualisation, put the business case forward and demonstrate the technical implementation. But at the end of the day, if a client does not trust it and does not want it, then I, as their advisor, look for another solution. There are a lot of POOR installations of hypervisors out there; is the software at fault? No. Implemented properly, as mentioned in the posts, it can be highly effective. But a lot of companies threw hypervisors at their datacentres with no thought or planning, and like every framework, hypervisors require both. Once bitten, twice shy, as the saying goes.

3. Costs. Virtualisation is not without its costs. Every host you carve out in a hypervisor environment has a cost, no matter how small. Best practice for clustering on VMware states that each node should be on an individual ESX server, so Enterprise SQL licences would be comparable to physical servers.
What about hidden costs? As an interesting exercise, compare instancing SQL Server against SQL on hypervisors. Each hypervisor SQL server requires a C drive, free space on that drive, a page file, free RAM for the OS... I'm not saying this is unacceptable in light of the benefits to be gained; I am just saying there is a different point of view to be had, and it should not be ignored.
What about training costs? Monitoring costs? Support costs? This technology can very quickly underpin your infrastructure; you need to get it right...

Which leads me to:

4. Supportability. One valuable lesson I have learnt working as a consultant: you don't aim for a successful project, you aim for a successful implementation for the lifetime of the client's needs. What do I mean by this? While the experts are engaged, projects usually succeed; it's when the experts roll off the project that things start to go wrong. In that vein, everything my organisation and I architect and implement must meet a single golden rule: it must be supportable by the client.

In the case of this article, my client did not have any VMware or other hypervisor experience. They had no budget to train, nor the desire to. The budget was spent by the time the need for UAT surfaced; we had to use what was there, with no money for extras like VMware.
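The hidden-cost comparison in point 3 can be put into rough numbers. This is a hypothetical back-of-the-envelope sketch, with made-up figures and helper names, just to show the shape of the trade-off:

```python
# Back-of-the-envelope RAM comparison (all figures hypothetical):
# k instances sharing one OS versus k VMs, each with its own OS overhead.

def instanced_ram_gb(instances: int, ram_per_instance_gb: int,
                     os_overhead_gb: int = 4) -> int:
    """Total RAM when all instances share a single OS installation."""
    return instances * ram_per_instance_gb + os_overhead_gb

def virtualised_ram_gb(vms: int, ram_per_instance_gb: int,
                       os_overhead_gb: int = 4) -> int:
    """Total RAM when each instance runs in its own VM with its own OS."""
    return vms * (ram_per_instance_gb + os_overhead_gb)

# Four 16 GB instances: shared OS versus one VM per instance.
print(instanced_ram_gb(4, 16))    # -> 68
print(virtualised_ram_gb(4, 16))  # -> 80
```

The per-VM OS, page file, and C-drive overheads scale with the number of VMs, which is the "different point of view" the post describes; whether that overhead matters depends entirely on the benefits gained in a given environment.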

So in summary, as I stated at the end of the article, this is just one way to achieve UAT at minimal expenditure. Is it the best? Probably not; there are more elegant ways to do this. But the important thing is that you can, and it hasn't really been documented before. I think that is a tribute to the flexibility of Windows and SQL Server, which is why I was happy to share it.

Again, thanks for the feedback and for taking time out of your busy day to read it. I hope you find it useful.


Post #1116221
Posted Friday, May 27, 2011 9:30 AM


SSCertifiable

Group: General Forum Members
Last Login: Yesterday @ 11:53 PM
Points: 6,192, Visits: 13,341
Mark Jones-393934 (5/27/2011)

1. As mentioned by other posters, the requirements are of too high a scale to allow the use of hypervisors.

Could you explain a little more what you mean by this?



Mark Jones-393934 (5/27/2011)
Best practice for clustering on VMware states that each node should be on an individual ESX server, so Enterprise SQL licences would be comparable to physical servers.

Ever heard of Cluster in a Box?
You'd be surprised at how little hardware you need for a virtual SQL Server cluster (especially with ESXi) that would be suitable for Dev\UAT usage. Dev systems would also be covered under an MSDN subscription if you have one.



Mark Jones-393934 (5/27/2011)

no money for extras like VMware.

VMware Server 2.0.x and ESXi are free of charge.

I have deployed a couple of dev\UAT clusters using different editions of VMware, and they make excellent testbeds.

BTW, I think I forgot to mention it originally: great article. It's good to read about others' experiences.


-----------------------------------------------------------------------------------------------------------

"Ya can't make an omelette without breaking just a few eggs"
Post #1116336
Posted Friday, May 27, 2011 10:01 AM
Grasshopper

Group: General Forum Members
Last Login: Friday, April 18, 2014 12:22 PM
Points: 10, Visits: 34
Mark Jones-393934 (5/27/2011)

So Active/Active/Active/Passive or AP+N, I really don't care; the purpose of this article was to share my experiences with you all on this topic. As Paul said, both are in use every day. The terminology I use is deliberately vanilla, so that everyone in a community of DBAs of varying skill can understand it. I am not writing an advanced book on clustering here, nor am I writing TechNet articles (which, funnily enough, use both terminologies for cluster configurations).

Safe to say, you all understood what I meant by an Active/Active/Active/Passive cluster configuration. So did I get it wrong? Not from my point of view; job done.


Whether people got it is not the point. You are wrong. Period. Your article (and others - not just singling you out) is why there is such confusion around terminology with clusters. Your attitude is a bit flippant.

Mark Jones-393934 (5/27/2011)

Secondly, virtualisation crops up a few times as a "better solution". I do not disagree with Perry or others who mentioned this is a good way to skin this proverbial cat, but there are times when virtualisation is just NOT suitable. Working as a consultant, I have hit quite a few of them:


I'm a consultant as well. Any deployment is an "it depends". There are rarely any absolutes.

Mark Jones-393934 (5/27/2011)

1. As mentioned by other posters, the requirements are of too high a scale to allow the use of hypervisors.
2. Clients' aversion to hypervisors. I see this more and more recently, especially around databases. As a consultant, I can sing and dance the virtues of virtualisation, put the business case forward and demonstrate the technical implementation. But at the end of the day, if a client does not trust it and does not want it, then I, as their advisor, look for another solution. There are a lot of POOR installations of hypervisors out there; is the software at fault? No. Implemented properly, as mentioned in the posts, it can be highly effective. But a lot of companies threw hypervisors at their datacentres with no thought or planning, and like every framework, hypervisors require both. Once bitten, twice shy, as the saying goes.


You could say the same for clusters, especially in architecting for the failover condition. I think I know just a wee bit about 'em (lol), but I see way more cluster hate stemming from bad experiences back in the 7 & 2000 days than I do aversion to virtualization. Virtualization is not initiated by DBAs; it's pushed on them by IT. On the flip side, it's up to the DBAs to know how to size and ask for what they need.

There are bad hypervisor implementations, just as there is bad physical hardware. Why do you think so many consolidated, and did it onto hypervisors? BECAUSE THEY BOUGHT BIG HW AND UNDERUTILIZED IT MORE OFTEN THAN NOT. Virtualization isn't the problem. People not doing their homework is.


Mark Jones-393934 (5/27/2011)

3. Costs. Virtualisation is not without its costs. Every host you carve out in a hypervisor environment has a cost, no matter how small. Best practice for clustering on VMware states that each node should be on an individual ESX server, so Enterprise SQL licences would be comparable to physical servers.
What about hidden costs? As an interesting exercise, compare instancing SQL Server against SQL on hypervisors. Each hypervisor SQL server requires a C drive, free space on that drive, a page file, free RAM for the OS... I'm not saying this is unacceptable in light of the benefits to be gained; I am just saying there is a different point of view to be had, and it should not be ignored.
What about training costs? Monitoring costs? Support costs? This technology can very quickly underpin your infrastructure; you need to get it right...


I suggest you look again. In some scenarios, the costs associated with virtualization (especially if you are using it for dev or test) are WAY lower than running physical hardware. There are some places in prod as well. Everything you mention (monitoring, etc.) is the SAME for a physical environment. A VM is essentially the same as a physical server from, say, a monitoring perspective, so it's no different. The only real difference is now monitoring the hypervisor and then possibly having to compare with what's going on in the VM. This isn't rocket science.

Your attitude strikes me as the way many thought 3 or 4 years ago with regard to virtualization of SQL Server. The space has matured quite a bit. Can it scale to the same as physical hardware for each VM? No. But some of the barriers are gone (especially around memory) with most hypervisors. Processor and potentially I/O are the bigger problems. You're not going to get large scale if you're processor-bound with current versions of hypervisors. But those wouldn't be the scenarios you'd look to virtualize, as you've said and I have above. It's all ... gasp ... about knowing your requirements. EVERY solution has tradeoffs to some degree, because no one has unlimited time, resources (human or technological), or budget.

Mark Jones-393934 (5/27/2011)

Which leads me to:

4. Supportability. One valuable lesson I have learnt working as a consultant: you don't aim for a successful project, you aim for a successful implementation for the lifetime of the client's needs. What do I mean by this? While the experts are engaged, projects usually succeed; it's when the experts roll off the project that things start to go wrong. In that vein, everything my organisation and I architect and implement must meet a single golden rule: it must be supportable by the client.

In the case of this article, my client did not have any VMware or other hypervisor experience. They had no budget to train, nor the desire to. The budget was spent by the time the need for UAT surfaced; we had to use what was there, with no money for extras like VMware.


Quite frankly, here's where I'll ding you: I would never suggest or recommend that a client put any environment other than production on the same cluster. You're asking for trouble in many respects. Might I suggest a quick look at another whitepaper I wrote recently (see here) about having proper testing for releases, which includes having the right environments. One of the biggest benefits of virtualization is that dev and QA can trash their environments without affecting production. You can't do that if they coexist on the same cluster, or on the same hardware if standalone.

As has been pointed out, cost is also relative, because Hyper-V is built into Windows and you can get free versions of VMware and other hypervisors.

When I work with clients, I never roll in with a "this is the right solution" attitude (and I'm not saying you do either). But there are times when it is worth arguing for what will be right long term. As a Cluster MVP, people assume that's the only architecture I'll present. They're wrong. Sometimes (but not always) budgets can be amended. Education is an issue that can be solved. The bean counters are important, but so is the right solution (whatever that may be). Short-term gain can lead to long-term pain. It may be manageable now, but what about 24 months from now? That's where the rubber meets the road.

Supportability to me as a consultant is more than just "can they manage this when I walk out the door?" Is the architecture 100% supported by the vendors if they need to call? That to me is as big an issue, if not bigger, and one that many overlook. When people start cutting costs and other things, corners can get sheared as well.
Post #1116367