Virtualization - yet again ...

  • I spent some time searching this forum and the interwebs and haven't found a lot of definitive insight/agreement on virtualization and best practices. So here we go with another VM thread.

    I'm seeing issues with virtualization that seem obvious on the surface, but maybe I'm naive and too old school.

    DB 101 says, in general, to separate MDF and LDF files onto separate disk drives. Does this still apply in a virtual environment? I have to think yes. Which means I have to get the sys admins who host our environment to make sure that log and data files are separate on the physical hardware, not just in the virtual layer. They resist divulging such things.
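
    Something like this is what I have in mind - explicit file placement at create time. The drive letters below are made up, and they only help if D: and L: really do live on separate physical storage underneath the hypervisor:

        CREATE DATABASE SalesDB
        ON PRIMARY
            ( NAME = SalesDB_data,
              FILENAME = 'D:\SQLData\SalesDB.mdf',   -- data on its own volume
              SIZE = 10GB, FILEGROWTH = 1GB )
        LOG ON
            ( NAME = SalesDB_log,
              FILENAME = 'L:\SQLLogs\SalesDB.ldf',   -- log on a separate volume
              SIZE = 2GB, FILEGROWTH = 512MB );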

    Similar issue with CPUs. A virtual server really doesn't know what it's running on. It could be sharing those 16 cores with 4 other instances and not even know it. So it seems reasonable that physical CPUs should be explicitly allocated to specific SQL instances, though I suppose this might be more for high volume instances that are expected to have constant high CPU utilization.
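
    At least you can check what the guest itself reports. A quick sketch (the virtual_machine_type_desc column requires SQL Server 2008 R2 SP1 or later):

        SELECT cpu_count,                 -- logical CPUs the guest sees
               hyperthread_ratio,
               virtual_machine_type_desc  -- NONE vs. HYPERVISOR
        FROM sys.dm_os_sys_info;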

    I recently ran into a technique for dealing with tempdb contention that recommended spreading the tempdb files across several disks depending on the number of CPUs, since each CPU can drive a separate tempdb I/O thread. Which means knowing the real CPU resources available, not the virtual ones, and the real configuration of the disk arrays.
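
    The mechanics are simple enough - it's picking the right file count and placement that needs the physical picture. A sketch, with a hypothetical T: drive; all the files should stay the same size so the round-robin allocation stays even:

        ALTER DATABASE tempdb
        ADD FILE ( NAME = tempdev2,
                   FILENAME = 'T:\TempDB\tempdb2.ndf',
                   SIZE = 1GB, FILEGROWTH = 512MB );
        -- repeat for tempdev3..tempdevN, one file per core up to the
        -- commonly cited cap of 8, adding more only if contention persists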

    Memory allocation figures in as well. A virtual instance can be assigned a max amount of memory that it may never actually get if the physical host has already doled it out. Rebooting an instance would also free up memory on the physical hardware, and another virtual instance might grab that, no? Which would leave the rebooted instance starved for memory even if it is "assigned" a specific amount through the virtualization layer.
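
    Capping the buffer pool inside the guest at least keeps SQL Server's idea of "its" memory honest - though it only helps if the hypervisor actually reserves that much for the VM. A sketch (the MB values are arbitrary examples):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 12288;  -- cap the buffer pool
        EXEC sp_configure 'min server memory (MB)', 8192;   -- floor it won't release below
        RECONFIGURE;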

    So, in general, it seems to me that if you are virtualizing SQL servers, you have to drill down through the virtual layer and make sure the physical resources are properly allocated to the instances.

    Clustering gets interesting if the nodes on a cluster happen to share some physical resource. If that resource fails you have the pleasure of multiple nodes on your cluster dropping off.

    This makes me wonder if virtualization is even worth the effort in some cases. The idea behind server consolidation is to make sure your hardware is fully utilized. If a single instance of SQL Server can keep the hardware fully utilized on its own, consolidation doesn't buy you much.

    A high availability, high volume system seems less a candidate for virtualization than, say, a number of low-volume instances for the departmental databases that seem to proliferate in many organizations.

    Sorry for the rambling. I just don't see a lot of definitive answers to some pretty serious issues around business continuity, performance, etc., when it comes to virtualizing servers. It seems like it has become "the thing to do" without enough thought given to the needs of the entire database environment. The best practice should be to examine the information system and then decide whether virtualization meets the need - not simply plop your junk on a virtual environment because some white paper says it will save you money.

    Thoughts?


    Bob
    SuccessWare Software

Bueller?

    Already fell to page three, so forgive the bump.

    I'm thinking the lack of response is essentially confirmation of what I suspect - that there still is no good, solid consensus on best practices for virtualized SQL Servers.

    Either that or I smell funny and nobody wants to talk to me. ;-)


    Bob
    SuccessWare Software

    My co-worker and I were discussing this yesterday for departmental low-volume databases and came to the same conclusion: the virtualization needs to align with the actual hardware of the server. Since we couldn't find much online about it, we decided to just try a couple of instances and see what the resulting metrics look like.
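
    As a starting point we're planning to snapshot a few counters before and after the move, something like this (per-second counters in this DMV are cumulative, so you have to sample twice and take the difference):

        SELECT counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Page life expectancy',
                               'Batch Requests/sec',
                               'Buffer cache hit ratio');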

    Erin

  • I did find this:

    “If you are using virtualization, and if the hypervisor supports it, use physical disks (through some sort of pass-through feature) instead of virtual disks (such as the VHDs in Hyper-V that are used for storing the guest operating system). The performance increase can be significant. Whether the configuration is physical or virtual, it is always recommended not to share underlying spindles with other servers, applications, and workloads. As mentioned at the beginning of this section, that may not be possible, but it is still a best practice. The two main things to worry about from a performance standpoint are transaction logs and tempdb.”


    Bob
    SuccessWare Software

  • It looks like some of your first post dropped off.

    In terms of IO, you still need good IO. There's no magic solution with virtualization. Multiple disks and multiple IO pathways still make sense as appropriate.

    In terms of memory, the modern hypervisors allow some reservations that prevent other guests from using all memory. Be careful here, but this isn't as simple, or as hard, as you might think. It's a balancing act, just as any consolidation is.

    For tempdb, treat the guest's vCPUs as you would a normal set of cores. You can set different levels of cores for the VMs, and for SQL Servers you are probably not overloading cores. If a host has 8 cores, I'm not sure I'd assign 40 vCPUs across its VMs. I know people running lots of virtualization, and while they might go dense for lots of servers (IIS, DCs, etc.) at 10:1 VMs per host, they would reduce that to something closer to 4:1 for SQL Servers, or even less.

  • Brent Ozar's videos/articles are a good start for studying VM suitability and configuration.

    We've got dozens of SQL servers on VMs and they are OK (more or less!).

    Of course I was picky with virtualisation and only considered quiet instances (less than ~10% avg CPU and not heavily transactional).

    The concept of 'disk' can vary for VMs. Are we talking LUNs? Even then a LUN can spread across 100 spindles, making nonsense out of old-school rules. I've found LUNs sharing just a handful of spindles, despite what the infrastructure guys claimed (measure throughput and IOPS yourself!).
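
    SQL Server keeps its own per-file accounting, which is handy when you don't trust the infrastructure numbers. A sketch (the figures are cumulative since instance start, so sample twice and diff for rates):

        SELECT DB_NAME(vfs.database_id) AS db,
               mf.physical_name,
               vfs.num_of_reads, vfs.num_of_writes,
               1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
               1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
          ON mf.database_id = vfs.database_id
         AND mf.file_id     = vfs.file_id;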

    As for CPUs, there is a school of thought that you should allocate as few vCPU cores as possible, because then it is easier to schedule them under virtualisation.

    (Example: if your VMs have 16 vCPUs assigned and the host only has 32 CPUs, there is bound to be contention as the scheduler tries to find 16 available cores every time. If you have only assigned 2 vCPUs, it is much easier.) I would experiment with realistic load and different CPU settings.
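
    One rough in-guest indicator: if signal waits (time spent runnable but waiting for a scheduler) dominate total waits, the vCPUs may be oversubscribed. The hypervisor's own CPU-ready counters are the more direct measure, but this is a start:

        SELECT SUM(signal_wait_time_ms) AS signal_waits_ms,
               SUM(wait_time_ms)        AS total_waits_ms,
               100.0 * SUM(signal_wait_time_ms)
                     / NULLIF(SUM(wait_time_ms), 0) AS signal_wait_pct
        FROM sys.dm_os_wait_stats;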

    My rule of thumb is: If performance is a serious consideration, consider physical first.

    I see virtualisation as a cost-optimisation method, trading off performance (throwing in some HA as a sweetener:w00t:)

    Cheers,

    JohnA

    MCM: SQL2008

  • 1) Forums are for TARGETED, small questions and assistance. Your post was neither. I think that is why people didn't answer.

    2) Yes, you are naive and old school. Virtualization is here to stay, for many good reasons. I have seen 50+TB SQL Server environments completely virtualized. I have seen many thousands of transactions per second hitting a single virtualized SQL Server. It works FINE - when done right.

    3) Speaking of done right, if you want virtualization efforts to be successful (ESPECIALLY where SQL Server is concerned), PLEASE do yourself and your company a favor and get a professional on board to assist you in planning, implementing, migrating, setting up monitoring, etc.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (3/7/2013)


    2) Yes, you are naive and old school. Virtualization is here to stay, for many good reasons. I have seen 50+TB SQL Server environments completely virtualized. I have seen many thousands of transactions per second hitting a single virtualized SQL Server. It works FINE - when done right.

    Of course virtualization is here to stay. Amazon's Elastic Compute Cloud has elevated it into an art form. I am more conservative than old school. Chasing a technology because everyone is doing it isn't a strategy; it's herd mentality.

    A 50 TB high throughput virtualized server is completely possible when, as you say, it is done RIGHT.

    TheSQLGuru (3/7/2013)


    3) Speaking of done right, if you want virtualization efforts to be successful (ESPECIALLY where SQL Server is concerned), PLEASE do yourself and your company a favor and get a professional on board to assist you in planning, implementing, migrating, setting up monitoring, etc.

    A business case has to be made for this. The expense of bringing in a consultant has to be justified, especially for a project that is already late and over budget. I need to know enough to say "this doesn't seem right, we really need an expert" and be able to back up that statement with more than opinions.

    I am in a situation where virtualized servers are already rolled out in their production configuration with what appears to be little attention paid to the specifics of the particular applications. There seems to be an expectation that one-size-fits-all will be just fine - provision a virtual SQL instance as a carbon copy of every other one. The 50 TB virtualization you spoke of was engineered from the very beginning to handle those specific workloads.


    Bob
    SuccessWare Software

    My thoughts, FWIW...

    Any type of deployment needs a business case. For some cases a virtualised solution will be the cheapest and easiest to manage, while for others a physical solution will tick the boxes. There is no one-size-fits-all approach that will allow you to meet your SLAs at minimum cost. Part of the planning for any type of server deployment should be the storage requirements, and these must include IO rates as well as GB capacity. If someone has only done part of the planning, they should not be surprised if they get only part of the performance or cost savings they hoped for.

    Virtualisation using a provider such as Amazon AWS has given my employer's organisation much more than simply the required SQL Server service at a significant cost saving over the previous hosting solution; it has also introduced a new level of business agility. We can now go from business case to deployed server in about 3 days, compared to about 3 months prior to the move - regardless of size of server, disk capacity or IO requirements. If agility is important to you, it needs to be part of your planning.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
