
SQL Server, SANs and Virtualisation
Posted Sunday, January 11, 2009 5:05 PM


SSCommitted


Group: General Forum Members
Last Login: Monday, June 23, 2014 11:55 AM
Points: 1,708, Visits: 1,792
Perry Whittle (1/11/2009)
homebrew01 (1/10/2009)
Perry Whittle (1/10/2009)
homebrew01 (1/10/2009)
Our plan is to take our two heaviest-hit SQL servers and put them onto their own physical machines running VMware. They will have the VM overhead, but won't share with any other servers. We like the VMotion DR-type aspects.


If you're putting each VM onto a dedicated box of its own, it's pointless spending the money on the VMware licences; they may just as well be physical SQL servers straight off. No problem with storing a 1TB database on the SAN at all, better in fact, I would think.


No DR or other advantages with using VM in this case ?

Possibly, but for the cost of the licences you could find other ways. The general rule of thumb is that if you get to the point where you dedicate a whole host to a VM, it may as well be a physical machine.


I absolutely agree. You would probably be better served clustering the two servers (HA) using boot-from-SAN technology and SAN mirroring (DR), putting the additional cost of VM licensing into more RAM and stronger hardware, and segmenting the databases into different instances on the cluster. You can't address the same amount of memory or number of processors in a VM that you can physically. This isn't a practical use of SQL in a VM, speaking as two people who run SQL in VMs.


Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
My Blog | Twitter | MVP Profile
Training | Consulting | Become a SQLskills Insider
Troubleshooting SQL Server: A Guide for Accidental DBAs
Post #634429
Posted Sunday, January 11, 2009 5:11 PM


SSCommitted


Group: General Forum Members
Last Login: Monday, June 23, 2014 11:55 AM
Points: 1,708, Visits: 1,792
SQLBOT (1/11/2009)

SA can enable xp_cmdshell, so I don't get your point here.

I like what you're saying, and it's worth noting that I think all SA access for vendors should be kept in check... sometimes it can't be. For me personally, it's never worth the risk to multi-instance a machine if one of the instances has a user with SA... ever.


My line of thinking is that if they require SA, I'm not going to bend over backward, but rather cut a new VM, wash my hands, and let the infosec and OS teams monitor them like they do all other production systems.


~BOT


If that is an acceptable trade-off for you and your company, and it works for you, then it is what it is. In SQL Server 2008, you can set a policy on the server that prevents enabling xp_cmdshell. Even if they do enable xp_cmdshell, if you separate service accounts they only get the level of privilege of the service account, so the risk is minimal if you configure your service accounts properly.
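As a minimal sketch of the sp_configure side of this, here is how you can check and explicitly disable xp_cmdshell (the Policy-Based Management policy mentioned above is typically created through SSMS rather than T-SQL). These are all standard sp_configure options; run as a sysadmin.

```sql
-- Show advanced options so xp_cmdshell is visible to sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Check the current state: a run_value of 1 means xp_cmdshell is enabled.
EXEC sp_configure 'xp_cmdshell';

-- Explicitly disable it.
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
```

Note that even with xp_cmdshell enabled, commands run under the SQL Server service account (or a proxy), which is why the low-privilege service account point above matters.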

For my company, there is no way that a $5K-per-CPU license and the extra resources of a dedicated SQL instance for one database makes sense, especially in the tougher economic times we face, and I don't work for a small company. Anywhere I can save or cut costs is well worth it, especially when it requires less than a few hours of work forcing a software vendor to do what they should have done responsibly in the first place.


Post #634430
Posted Sunday, January 11, 2009 5:24 PM


SSCommitted


Group: General Forum Members
Last Login: Monday, June 23, 2014 11:55 AM
Points: 1,708, Visits: 1,792
Perry Whittle (1/11/2009)


One other thing to test for your larger database is the effect of parallelism waits (CXPACKET) on performance. My experience with VMs is that parallel processing is a performance killer rather than a performance enhancer. The reason is that the VM host software manages context switching, so multiple threads are no longer necessarily running against multiple physical processors. Generally speaking, I see much better performance from my VMs by setting Max Degree of Parallelism to 1. If your 1TB+ database currently benefits from parallel processing, you definitely want to test what performance will be like under a VM.
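For anyone wanting to try this, a minimal sketch of both ways to cap parallelism: instance-wide via sp_configure, or per-query with a hint. `dbo.BigTable` is a hypothetical table name for illustration.

```sql
-- Instance-wide: limit all queries to one scheduler. Test before
-- applying in production, since some workloads genuinely need parallelism.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Per-query alternative, leaving the instance setting untouched
-- (dbo.BigTable is a placeholder):
SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 1);
```

The per-query hint is the safer way to test the poster's claim on a single large query before changing the whole instance.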


Post #634434
Posted Monday, January 12, 2009 7:02 AM
SSC-Addicted


Group: General Forum Members
Last Login: Yesterday @ 6:55 AM
Points: 469, Visits: 261
Jonathan Kehayias (1/11/2009)



I absolutely agree with this. Our Windows administrators thought the VM DR advantages were worth virtualising one physical machine as one large VM. This was a server with several 100+ GB data marts and lots of parallel query plans. Performance was 25% to 400% slower in the VM world compared to pure physical. Obviously your mileage may vary, but the bottom line is you should really test VM versus physical before you decide.
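A simple way to quantify the parallelism cost in such a before/after test is to snapshot the CXPACKET waits on each platform while running the same workload. This is a sketch using the standard wait-stats DMV (available since SQL Server 2005):

```sql
-- Cumulative parallelism waits since the instance started (or since the
-- stats were last cleared). Compare physical vs. VM under the same load.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';
```

Capture the numbers before and after the test run on each platform and compare the deltas, since the DMV values are cumulative.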

Having said that, SQL Server on VM works great for us in other areas. Generally these are smaller home-grown or third-party apps. Like one of the earlier posters, we find that many third-party apps assume they are the only user of the DB server and expect the sa user and xp_cmdshell. I'd like to say don't buy it if the vendor doesn't know how to code it responsibly, but I don't get to make all of those decisions, so we just VM it.
Post #634643
Posted Monday, January 12, 2009 8:00 AM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 10:52 AM
Points: 6,291, Visits: 13,552
It's worth noting that if you licence the CPUs for Enterprise Edition of SQL Server at the host level, it covers you for unlimited SQL Server CPU licences on any VMs on that host.

-----------------------------------------------------------------------------------------------------------

"Ya can't make an omelette without breaking just a few eggs"
Post #634698
Posted Monday, January 12, 2009 8:42 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, December 6, 2013 8:32 AM
Points: 39, Visits: 298
John Beggs (1/11/2009)
Keep in mind, though, sharing spindles doesn't have to be bad. If you know the workload of the other devices sharing the spindles, you can gain from sharing. Which would you rather have: 10 dedicated spindles for your 1TB db or 90 shared spindles? It depends on how much load the other members sharing the spindles will add.


And it depends on the access pattern of your application. If your pattern is a SQL-based data warehouse, where you see very large sequential transfers, the 10 spindles as direct-attach storage may well beat the 90 spindles on a SAN. (It can be very, very expensive or even impossible to match the bandwidth of the direct-attach storage in the SAN solution.) I didn't believe this until I actually saw it! Of course, if your application is a transaction processing app with lots of small disk accesses, the SAN will handily beat the direct-attach....




The End.
Post #634738
Posted Monday, January 12, 2009 11:16 AM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, December 27, 2013 9:53 AM
Points: 99, Visits: 637
Sir Slicendice (1/12/2009)
Yes, I think I said in one of my earlier posts (unless it didn't make it into the "final" draft...) that comparable DAS will always beat SAN storage. Though I'm not sure the 10-vs-90 example is comparable... ;)




Post #634872
Posted Monday, January 12, 2009 3:33 PM
SSCrazy


Group: General Forum Members
Last Login: Monday, June 9, 2014 6:02 AM
Points: 2,674, Visits: 697
See what a hornet's nest this stirs up. I'm pretty sure iSCSI SANs work on 1Gb Ethernet; that's going to be one hell of a bottleneck. All LUNs across all disks, eh? Yeah, the one I can't name is one of those; so far all I have is inconsistent test results, and I broke one SAN due to a lack of backplane bandwidth. 300 spindles was mentioned to me, but sadly I don't have the full spec. Put it like this: my dedicated LUNs on my current EMC SAN at least give consistent results, all still outperformed by 4 x 10K SAS disks in a RAID 10.
VMs I've used for dev and test systems where performance isn't critical; I have no issues with that at all. As to parallelism and VMs, I think that has to be absolute vendor bull****, and as I have worked almost exclusively with multi-proc boxes since SQL 6.5 I can't find any truth in it at all. But I think we should leave that aspect out of this. Great thread, people, keep it up - fascinating!


The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
Post #635096
Posted Monday, January 12, 2009 4:02 PM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 10:52 AM
Points: 6,291, Visits: 13,552
colin Leversuch-Roberts (1/12/2009)
see what a hornets nest this stirs up.


You're not wrong there, sir. A great thread going on here!
Who will win the speed war, copper or fibre optic?


Post #635104
Posted Wednesday, January 21, 2009 6:26 AM
SSCertifiable


Group: General Forum Members
Last Login: 2 days ago @ 3:52 PM
Points: 5,991, Visits: 12,923
A long thread here, but I don't think this has been asked/answered:

The article mentioned that VM streamlines the DR process, but for that to be fully true the SAN would have to be mirrored as well, yes?

For what it's worth, we have hit performance problems on VMware. This was with a fairly small (20GB) but heavily used database. I was not involved in the design of the solution (sigh), but I understand it is an up-to-date version of VMware; all VMs on the box were sharing the same LUN, so the problem was likely the SAN setup?
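One way to check whether a shared LUN is the culprit is to look at per-file I/O latency inside SQL Server. A sketch using the standard file-stats DMV; high average latencies concentrated on files sharing the busy LUN would point at the storage layout rather than SQL Server itself:

```sql
-- Average read/write latency per database file since instance start.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;
```

Comparing these numbers against the same query on a physical box with dedicated storage would separate VMware overhead from SAN-layout contention.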



Post #640635