SQL Server, SANs and Virtualisation

  • homebrew01 (1/10/2009)


    Our plan is to take our 2 heaviest-hit SQL servers and put them onto their own physical machines running VMware. They will have the VM overhead, but won't be sharing with any other servers. We like the VMotion DR-type aspects.

    If you're putting each VM onto a dedicated box of its own, it's pointless spending the $$$$$$$ on the VMware licences; they may just as well be physical SQL Servers straight off. No problem with storing a 1 TB db on the SAN at all; better, in fact, I would think.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (1/10/2009)


    homebrew01 (1/10/2009)


    Our plan is to take our 2 heaviest-hit SQL servers and put them onto their own physical machines running VMware. They will have the VM overhead, but won't be sharing with any other servers. We like the VMotion DR-type aspects.

    If you're putting each VM onto a dedicated box of its own, it's pointless spending the $$$$$$$ on the VMware licences; they may just as well be physical SQL Servers straight off. No problem with storing a 1 TB db on the SAN at all; better, in fact, I would think.

    No DR or other advantages to using VM in this case?

    I wouldn't say he lied to you... but that doesn't mean you have the right picture either. My point was that the fabric architecture does not define how the data is stored; the solution's software does. With EqualLogic, I am pretty sure that you can specify the spindles (disks) that you want the data to reside on. The question becomes, however: do you want to?

    The vendor will most likely try to tell you that you need to (or should) let the storage system decide where to move it all. This is especially true of those sales engineers who have limited understanding of anything not made by their company, which, in my experience, means most of them. 😉

    Keep in mind though, sharing spindles doesn't have to be bad. If you know the workload of the other devices sharing the spindles, you can gain from sharing. Which would you rather have: 10 dedicated spindles for your 1 TB db, or 90 shared spindles? It depends on how much load the other members sharing the spindles will add.
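
    A rough back-of-envelope illustrates the trade-off. These numbers are purely illustrative assumptions (the per-spindle IOPS figure and the neighbours' load are made up, not measured):

    -- Hypothetical figures only: ~180 IOPS per 15K spindle is an assumption for illustration
    DECLARE @iops_per_spindle int, @dedicated_spindles int, @shared_spindles int, @other_load_iops int;
    SET @iops_per_spindle   = 180;    -- assumed small-block IOPS per spindle
    SET @dedicated_spindles = 10;     -- the dedicated option
    SET @shared_spindles    = 90;     -- the shared pool
    SET @other_load_iops    = 12000;  -- assumed load the other members put on the shared pool

    SELECT @dedicated_spindles * @iops_per_spindle                   AS dedicated_iops_available,  -- 1,800
           (@shared_spindles * @iops_per_spindle) - @other_load_iops AS shared_iops_left_for_you;  -- 4,200

    With these made-up numbers the shared pool still leaves more headroom; push the neighbours' load past about 14,400 IOPS and the dedicated spindles win.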

  • homebrew01 (1/10/2009)


    Perry Whittle (1/10/2009)


    homebrew01 (1/10/2009)


    Our plan is to take our 2 heaviest-hit SQL servers and put them onto their own physical machines running VMware. They will have the VM overhead, but won't be sharing with any other servers. We like the VMotion DR-type aspects.

    If you're putting each VM onto a dedicated box of its own, it's pointless spending the $$$$$$$ on the VMware licences; they may just as well be physical SQL Servers straight off. No problem with storing a 1 TB db on the SAN at all; better, in fact, I would think.

    No DR or other advantages to using VM in this case?

    Possibly, but for the cost of the licences you could find other ways. The general rule of thumb is that if you get to the point where you dedicate a whole host to a single VM, it may as well be a physical machine.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Jonathan Kehayias (1/8/2009)


    SQLBOT (1/8/2009)


    I think it depends on your rationale.

    Normally I go for a single host\instance for the performance. The reason we use virtualization is for vendor applications that require any sort of sa access. SA has OS access... I don't want somebody taking down a server with multiple instances on it by using xp_cmdshell, or by running a poorly written upgrade script that fills up the OS drive...

    Virtualization creates a safe sandbox environment for sa, but does add the overhead of providing memory and disk space for the OS.

    Further, xp_cmdshell is disabled by default in SQL 2005, and very few applications being built today require it to be enabled in SQL Server.

    SA can enable xp_cmdshell, so I don't get your point here.

    I like what you're saying, and it's worth noting that I think all SA access for vendors should be kept in check... sometimes it can't be. For me personally, it's never worth the risk to multi-instance a machine if one of the instances has a user with SA... ever.

    My line of thinking is that if they require SA, I'm not going to bend over backward, but rather cut a new VM, wash my hands and let the infosec and OS teams monitor them like they do all other production systems.

    ~BOT

  • Perry Whittle (1/11/2009)


    homebrew01 (1/10/2009)


    Perry Whittle (1/10/2009)


    homebrew01 (1/10/2009)


    Our plan is to take our 2 heaviest-hit SQL servers and put them onto their own physical machines running VMware. They will have the VM overhead, but won't be sharing with any other servers. We like the VMotion DR-type aspects.

    If you're putting each VM onto a dedicated box of its own, it's pointless spending the $$$$$$$ on the VMware licences; they may just as well be physical SQL Servers straight off. No problem with storing a 1 TB db on the SAN at all; better, in fact, I would think.

    No DR or other advantages to using VM in this case?

    Possibly, but for the cost of the licences you could find other ways. The general rule of thumb is that if you get to the point where you dedicate a whole host to a single VM, it may as well be a physical machine.

    I absolutely agree. You would probably be better served clustering the two servers (HA) using boot-from-SAN technology and SAN mirroring (DR), putting the additional cost of VM licensing into more RAM and stronger hardware, and segmenting the databases into different instances on the cluster. You can't address the same amount of memory or as many processors in a VM as you can physically. This isn't a practical use of SQL in a VM, from the standpoint of two people who do run SQL in VMs.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • SQLBOT (1/11/2009)


    SA can enable xp_cmdshell, so I don't get your point here.

    I like what you're saying, and it's worth noting that I think all SA access for vendors should be kept in check... sometimes it can't be. For me personally, it's never worth the risk to multi-instance a machine if one of the instances has a user with SA... ever.

    My line of thinking is that if they require SA, I'm not going to bend over backward, but rather cut a new VM, wash my hands and let the infosec and OS teams monitor them like they do all other production systems.

    ~BOT

    If that is an acceptable trade-off for you and your company, and it works for you, then it is what it is. In SQL 2008 you can set a policy on the server that prevents enabling xp_cmdshell. Even if they do enable xp_cmdshell, separating the service accounts means they only get the level of privilege of the service account, so the risk is minimal if you configure your service accounts properly.
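
    As a minimal sketch of what that check looks like (standard sp_configure and sys.configurations calls; the Policy-Based Management piece itself is configured separately and isn't shown here):

    -- Is xp_cmdshell currently enabled? (value_in_use = 1 means enabled)
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = 'xp_cmdshell';

    -- Explicitly disable it (it is off by default from SQL 2005 onwards)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 0;
    RECONFIGURE;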

    For my company, there is no way that a $5K-per-CPU license and the extra resources for a separate SQL instance for one database makes sense, especially in the tougher economic times we face, and I don't work for a small company. Anywhere that I can save or cut costs is well worth it, especially when it requires less than a few hours of work forcing a software vendor to do what they should have done responsibly in the first place.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Perry Whittle (1/11/2009)


    homebrew01 (1/10/2009)


    Perry Whittle (1/10/2009)


    homebrew01 (1/10/2009)


    Our plan is to take our 2 heaviest-hit SQL servers and put them onto their own physical machines running VMware. They will have the VM overhead, but won't be sharing with any other servers. We like the VMotion DR-type aspects.

    If you're putting each VM onto a dedicated box of its own, it's pointless spending the $$$$$$$ on the VMware licences; they may just as well be physical SQL Servers straight off. No problem with storing a 1 TB db on the SAN at all; better, in fact, I would think.

    No DR or other advantages to using VM in this case?

    Possibly, but for the cost of the licences you could find other ways. The general rule of thumb is that if you get to the point where you dedicate a whole host to a single VM, it may as well be a physical machine.

    One other thing to test for your larger database is the effect of parallelism waits (CXPACKET) on performance. My experience with VMs is that parallel processing is a performance killer rather than a performance enhancer. The reason is that the VM host software manages the context switching, so multiple threads are no longer necessarily running against multiple physical processors. Generally speaking, I see much better performance from my VMs by setting Max Degree of Parallelism to 1. If your 1TB+ database currently benefits from parallel processing, you definitely want to test what the performance will be like under a VM.
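
    A minimal sketch of that test, using the standard wait-stats DMV and sp_configure option (shown instance-wide here; a MAXDOP query hint works just as well while testing):

    -- How much of the instance's wait time is parallelism-related?
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'CXPACKET';

    -- Instance-wide: turn parallelism off and re-run the workload
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;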

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Jonathan Kehayias (1/11/2009)


    One other thing to test for your larger database is the effect of parallelism waits (CXPACKET) on performance. My experience with VMs is that parallel processing is a performance killer rather than a performance enhancer. The reason is that the VM host software manages the context switching, so multiple threads are no longer necessarily running against multiple physical processors. Generally speaking, I see much better performance from my VMs by setting Max Degree of Parallelism to 1. If your 1TB+ database currently benefits from parallel processing, you definitely want to test what the performance will be like under a VM.

    I absolutely agree with this. Our Windows administrators thought the VM DR advantages were worth virtualizing one physical machine as one large VM. This was a server with several 100+ GB data marts and lots of parallel query plans. Performance was 25% to 400% slower in the VM world compared to pure physical. Obviously your mileage may vary, but the bottom line is that you should really test VM versus physical before you decide.

    Having said that, SQL Server on VM works great for us in other areas. Generally these are smaller home-grown or third-party apps. Like one of the earlier posters, we find that many third-party apps assume they are the only user of the DB server and expect an sa user and xp_cmdshell. I'd like to say don't buy it if the vendor doesn't know how to code it responsibly, but I don't get to make all of those decisions, so we just VM it.

    It's worth noting that if you license the CPUs for the Enterprise edition of SQL Server at the host level, it covers you for unlimited SQL Server CPU licences on any VMs on that host.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

    John Beggs (1/11/2009)

    Keep in mind though, sharing spindles doesn't have to be bad. If you know the workload of the other devices sharing the spindles, you can gain from sharing. Which would you rather have: 10 dedicated spindles for your 1 TB db, or 90 shared spindles? It depends on how much load the other members sharing the spindles will add.

    And it depends on the access pattern of your application. If your pattern is a SQL-based data warehouse, where you see very large sequential transfers, the 10 spindles as direct-attach storage may well beat the 90 spindles on a SAN. (It can be very, very expensive or even impossible to match the bandwidth of the direct-attach storage in the SAN solution.) I didn't believe this until I actually saw it! Of course, if your application is a transaction processing app with lots of small disk accesses, the SAN will handily beat the direct-attach....
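
    A quick illustrative comparison (the MB/s figures below are assumptions for the sake of argument, not vendor specs): ten direct-attached spindles streaming sequentially can add up to more bandwidth than the SAN paths will carry.

    -- Illustrative figures only (assumed): ~100 MB/s sequential per spindle,
    -- ~400 MB/s per 4 Gb FC port, ~110 MB/s usable per 1 GbE iSCSI link
    DECLARE @mb_per_spindle int, @spindles int, @fc_ports int, @mb_per_fc_port int;
    SET @mb_per_spindle = 100;
    SET @spindles       = 10;
    SET @fc_ports       = 2;
    SET @mb_per_fc_port = 400;

    SELECT @spindles * @mb_per_spindle AS das_sequential_mb_per_sec,      -- ~1,000
           @fc_ports * @mb_per_fc_port AS fc_san_path_ceiling_mb_per_sec, -- ~800
           110                         AS gbe_iscsi_link_ceiling_mb_per_sec;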


    The End.

  • Sir Slicendice (1/12/2009)


    John Beggs (1/11/2009)

    Keep in mind though, sharing spindles doesn't have to be bad. If you know the workload of the other devices sharing the spindles, you can gain from sharing. Which would you rather have: 10 dedicated spindles for your 1 TB db, or 90 shared spindles? It depends on how much load the other members sharing the spindles will add.

    And it depends on the access pattern of your application. If your pattern is a SQL-based data warehouse, where you see very large sequential transfers, the 10 spindles as direct-attach storage may well beat the 90 spindles on a SAN. (It can be very, very expensive or even impossible to match the bandwidth of the direct-attach storage in the SAN solution.) I didn't believe this until I actually saw it! Of course, if your application is a transaction processing app with lots of small disk accesses, the SAN will handily beat the direct-attach....

    Yes, I think I said in one of my earlier posts (unless it didn't make it into the "final" draft...) that comparable DAS will always beat SAN storage. Though I am not sure the 10-90 example is comparable... 😉

    See what a hornets' nest this stirs up. I'm pretty sure iSCSI SANs work on 1 Gb Ethernet; that's going to be one hell of a bottleneck. All LUNs across all disks, eh? Yeah, the one I can't name is one of those. So far all I have is inconsistent test results, and I broke one SAN due to a lack of backplane bandwidth. 300 spindles was mentioned to me, but sadly I don't have the full spec. Put it like this: my dedicated LUNs on my current EMC SAN at least give consistent results, and all are still outperformed by 4 x 10K SAS disks in a RAID 10.

    VMs I've used for dev and test systems where performance isn't critical; I have no issues with that at all. As for parallelism and VMs, I think that has to be absolute vendor bull****, and as I have worked almost exclusively with multi-proc boxes since SQL 6.5 I can't find any truth in it at all. But I think we should leave that aspect out of this. Great thread, people, keep it up - fascinating!

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • colin Leversuch-Roberts (1/12/2009)


    See what a hornets' nest this stirs up.

    You're not wrong there, sir. A great thread going on here!

    Who will win the speed war, copper or fibre optic?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

    A long thread here, but I don't think this has been asked/answered:

    The article mentioned that VM streamlines the DR process, but for that to be fully true the SAN would have to be mirrored as well, yes?

    For what it's worth, we have hit performance problems on VMware. This was with a fairly small (20 GB) but heavily used database. I was not involved in the design of the solution (sigh), but I understand it is an up-to-date version of VMware; however, all the VMs on the box were sharing the same LUN, so the problem was likely the SAN setup?
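
    One way to see whether the shared LUN really is the problem is to look at the per-file I/O stall figures SQL Server tracks itself. A minimal sketch using the standard DMVs (SQL 2005 and later):

    -- Average read/write latency per database file since the instance started
    SELECT DB_NAME(vfs.database_id)                             AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_ms DESC;

    If the latencies come back high for every VM on that LUN, it points at the shared LUN/spindle layout rather than at VMware itself.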

    ---------------------------------------------------------------------
