The Downside of Virtualization

  • It seems that setting up virtual servers is the hot new thing these days; more and more articles and press coverage are devoted to them. Here's an interesting one on a virtual application server using WebLogic. The idea is interesting, and I think it would be great if we could somehow get our database servers to do this.

    There's another kind of virtualization I'd like to see, and honestly, if I'm missing the boat or it already exists, let me know. I haven't used VMware in a few versions, and I've never tried Virtual Server, though I am interested in giving it a go.

    One company I worked at years ago bought a large pSeries from IBM. This thing had something like 32 CPUs, 64 GB of RAM, and no local disks; actually it had disks, but they were on the SAN. Anyway, we only purchased 24 or 30 of the CPUs, so the rest were disabled. Same with the RAM: we had bought around 40 GB, and the rest was disabled. We could call IBM and get a "key" to enable the remaining hardware, either permanently with a purchase or temporarily with a rental. They'd love for us to rent it indefinitely, since I think the payback for a purchase was 90 days, and I'm sure there are companies that "rent" CPUs for much longer, much to the delight of the IBM sales guys.

    This thing also had hot-add memory and CPUs, which meant literally no downtime for the overall box, although any particular partition would drop if its CPU went belly up. That was another cool thing: partitions. Through firmware, we could set up a particular "partition" with 2 CPUs and 4 GB of RAM, and it would run as a machine essentially separate from the other "partitions" on the box. It was cool virtualization, and with hot-add support for memory in the version of AIX we were running, we had the ability to "shrink" the QA partition and grow the production partition if need be.

    That's what I'd like to see from Dell and Microsoft: Windows with hot-add CPU and memory capabilities (don't forget hot-remove) along with hardware support, and even spare hardware in the box with the ability to turn it on. I know this probably only makes sense for boxes with 8, 16, or more CPUs, but it would be cool. And add the ability to partition off Windows and grow and shrink the partitions in real time.

    Of course, it does make one's job harder. You should have heard the arguments between the DBAs and the app server guys about moving resources. They were truly memorable.

    Steve Jones

  • I really don't get it... at the end of the day they would be different OSes running on the same box, but with the same maintenance or more...

    The only reason I see for having a machine with virtual servers inside would be in development environments, for testing server communication or running the 1,001 scenarios without having the office crammed with servers.

    At the end of the day, has anyone done a benchmark between a single server (X processors, X memory, X resources) and a resource-matched set of virtual servers?

    Speaking from ignorance, of course...

  • Mmmmmmm... The keyboard would be something to see! What with all those Ctrl, Alt, Del keys!!!!! 😀

    I suspect Linux might be first to do that, though.

  • Luis, there are lots of reasons for and against virtualization, and I would agree that virtualization may in fact introduce as much new overhead and maintenance time as it mitigates. It really depends on the environment. However, from a financial point of view, virtualization offers two very attractive benefits:

    1) Most Wintel boxes average 5% to 15% CPU utilization (probably closer to 5%) over a long period of time. That's a lot of money spent on unused capacity. Virtualizing servers allows all of these workloads to be consolidated so CPU utilization can be pushed to 85% or more. This has actually been one of IBM's primary arguments for their midrange servers (iSeries and pSeries, as mentioned in the article): midrange servers have a larger up-front cost, but provide great stability and economies of scale.

    2) You can dynamically change the amount of resources available to a workload as your business requires. In a dedicated-server model, you may often have servers that average 5% CPU most of the month but hit 100% CPU during other periods. With virtualization, this periodic need for resources is easier to accommodate: workloads can have ample resources available when they need them, and have resources pulled away when they don't.
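    Ryan's first point lends itself to a quick back-of-the-envelope check. A minimal sketch (the 5% and 85% figures come from his post; the server counts are invented examples):

```python
import math

# Back-of-the-envelope consolidation estimate, assuming dedicated boxes
# idle at ~5% average CPU and a virtualization host can safely be run
# at ~85% utilization (figures from the post above).

def hosts_needed(num_servers, avg_util=0.05, target_util=0.85):
    """How many equally sized hosts can absorb the combined average
    load of `num_servers` dedicated boxes."""
    total_load = num_servers * avg_util   # load in "whole server" units
    return math.ceil(total_load / target_util)

print(hosts_needed(20))   # prints 2: twenty mostly-idle boxes fit on two hosts
```

    Averages hide peaks, of course, which is exactly what Ryan's second point is about.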

    Ryan

  • Steve, you seem to be describing a mainframe.

    Also, one of the biggest advantages of virtualizing your environment is that you usually end up with greater control to reproduce the application (server, OS setup, and configuration included) for proper testing and recovery processes.

    Sounds like interesting hardware, but I'd like to see this become more mainstream. Perhaps with Xen and the new Conroe processors it will get a bit closer.

    John

  • I think it gets close to a mainframe environment as the virtualization software improves. The IBM pSeries definitely includes some mainframe ideas that have evolved onto a smaller platform.

    One of the downsides right now on commodity servers is that the host needs patching and maintenance, which can impact more than one server at once. Also, maybe Virtual Server fixes this now, but a few years back we had issues with auto-starting the virtual sessions. Definitely not something you want to have happen in a production environment.

  • a,

    I'm no expert on mainframes, but I believe mainframes are traditionally considered to be very large-scale servers accessed exclusively by dumb terminals. I think Steve's example speaks more to midrange servers, which can partition out chunks of resources to multiple OS types (Windows, Unix, Linux, etc.) but completely subscribe to the traditional client/server model and work in concert with "smart" clients.

    However, the line between mainframe and midrange is beginning to blur.

    I think Steve provided a great example here. Servers such as the one he described were the pioneers of workload consolidation, and those partitioned boxes performed without the overhead that early versions of today's virtualization software have suffered from. One new benefit of virtualization is the low cost of entry: we can now consolidate workloads on servers far less extravagant than the partitioned midrange servers.

  • I've used VMware's early ESX products and unfortunately encountered one of their "features"; other than that, the experience was relatively good. The box ran SQL Server, Lotus Notes, and file and print services in three separate VMs - nothing to stretch it, really.

    I've heard from a retailer that the latest version allows several computers to access SAN storage, giving you the ability to move entire VMs live over to different hardware with no interruption - that's one amazing feature. Not quite hot-swappable CPUs and RAM, but close enough for most.

  • Steve,

    It sounds to me like you would prefer a "blade" server setup rather than a virtual environment. If you need a new server, you simply slap in another blade.

    I believe from a cost perspective you will find that virtual servers quickly become much more cost-effective than blades. We are currently implementing a number of virtual servers at my company, and I think it will be a great improvement. The production SQL cluster will still have its own physical hardware, but the test and development instances will be on virtual machines.

    Eric

  • In addition to better utilization of a server, virtualization also offers failover capabilities beyond those of clustering. You can fail over a virtual server with little to no noticeable effect. We've tested VMware's failover by watching a streaming video and then failing the virtual server over to another physical server, and there was no noticeable delay in the video. Without actually looking at the management console, you would never have known we had just switched servers.

    If you need to upgrade a server, you can simply switch the virtual servers on server A over to server B/C/D, etc. Then you can bring down server A and make any changes needed. Once everything is done and verified, simply move the virtual servers back to server A. The end users would never know what you've done. No more late nights and weekends just to work on a server.

    Also, if you need another server, rather than waiting days to order the hardware, install, and configure it, you can create a new virtual server in minutes. We've successfully tested up to 20 virtual servers on a single server, but you can go even higher. With the same operating system on each virtual server, you can utilize shared memory, making memory usage more efficient as well. So from a business point of view it seems to make a lot of sense to virtualize your servers: better utilization of hardware, cheaper, quicker to implement, and better failover.
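    David's shared-memory point refers to content-based page sharing (VMware calls it transparent page sharing): identical memory pages across same-OS guests are detected and stored only once. A toy sketch of the idea, with invented stand-in "pages":

```python
import hashlib

# Toy illustration of content-based page sharing: identical pages across
# guests are detected by content hash and stored only once. The "pages"
# below are invented stand-ins for fixed-size guest memory pages.

def shared_footprint(guests):
    """Return (naive_page_count, deduplicated_page_count) across guests."""
    unique = set()
    total = 0
    for pages in guests.values():
        for page in pages:
            total += 1
            unique.add(hashlib.sha256(page).hexdigest())
    return total, len(unique)

# Three same-OS guests share kernel/library pages but differ in app data.
guests = {
    "vm1": [b"kernel", b"libc", b"app-data-1"],
    "vm2": [b"kernel", b"libc", b"app-data-2"],
    "vm3": [b"kernel", b"libc", b"app-data-3"],
}
print(shared_footprint(guests))  # prints (9, 5): 9 pages stored as 5
```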


    I'm no expert; there are other individuals in our workplace who actually set up and run the virtual servers, but it seems the gains have far outweighed any extra maintenance.

    David

  • OK, I get your point... we have loads of servers sitting idle most of the time, but that falls short when we need full power from them...

    If the processor load balancing is really, really smart, then I would go for it. One feature I would like to find is having four virtual servers that can each use as much power from the server as they want, as long as the rest don't need it. If everyone starts getting busy, then limit each to a 25% processor load. That would require very fast throttling... if this new generation of VMs can do this well, then I'm in.
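    Luis's burst-then-throttle idea can be sketched as a simple allocation rule. The four VMs and the 25% floor come from his post; everything else here is a toy model, not how any real hypervisor scheduler works:

```python
# Toy CPU allocator for the policy above: each VM may burst into idle
# capacity, but once total demand exceeds the box, every VM falls back
# to its guaranteed share (25% for four VMs), with busy VMs splitting
# any capacity the light VMs leave unclaimed.

def allocate(demands, capacity=1.0, guarantee=0.25):
    """Return a CPU grant per VM, given each VM's demand (fractions of the box)."""
    if sum(demands) <= capacity:
        return list(demands)            # headroom available: let every VM burst
    grants = [min(d, guarantee) for d in demands]
    leftover = capacity - sum(grants)   # capacity the light VMs didn't claim
    hungry = [i for i, d in enumerate(demands) if d > guarantee]
    for i in hungry:                    # split the slack evenly among busy VMs
        grants[i] += leftover / len(hungry)
    return grants

# Two busy and two idle VMs: busy pair splits the slack, roughly 45% each.
print(allocate([0.9, 0.9, 0.05, 0.05]))
```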

    The only thing remaining to solve, on the logistics side, is how to cope with downtime for maintenance. With everything so hot-swappable it would be minimal, but you know it will happen...

    Something to keep an eye on.

  • Hi All

    Let me share a little piece of information with you on some work I saw in the mid-80's. It used DEC Alpha equipment with a special version of OpenVMS, which allowed you to run NT4 servers (yes, plural) as 64-bit virtual machines. Each machine could then take advantage of the native clustering facilities available on the DEC, including wide-area clusters.

    Don't ask me how it was done, I just went to the Demo and was very impressed.

    just my 2 pennyworth

    Paul

  • Hey Paul, time must really fly for you. I think you are at least 10 years early in your estimate of when this took place.

    NT anything wasn't even released until the 90's, much less NT4.

  • Having hot-swappable equipment does not seem very cost-effective for most environments. It sure would be nice for applications where downtime costs exceed hardware costs, though.

    I haven't used VMware for about a year and a half, so I don't know if it has improved since. My overall impression of it was very positive. However, even a powerful development box could be quickly overwhelmed with only two OSes running concurrently if both were running resource-intensive applications. I also encountered some issues assigning hardware keys to a particular OS.

    Virtual environments don't paint a true picture of how an application will perform. They are useful for some development tasks, basic unit testing, and QA, but for stress testing and validating field operation, a real machine is crucial.


    Karen Gayda
    MCP, MCSD, MCDBA

    gaydaware.com

  • For most systems, a well-written application can run entirely in a virtual environment. Not everyone needs ten kabillion transactions per second on 30-googlebyte storage systems. Often supporting 10K user sessions is sufficient, which is more than doable using web-based systems with database backends.

    Also, with the virtualization layer becoming thinner and thinner, you will see much better performance. Take a look at Xen, for example.

    Xen takes advantage of special instructions in Intel and AMD processors that let you run even unmodified MS Windows (albeit with special drivers) in a virtualized environment where the CPU speed is about 95% of native. This will eventually grow into virtualization for the graphics and disk subsystems as well.
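    Those special instructions show up as CPU feature flags: vmx for Intel's VT-x and svm for AMD's AMD-V. A small sketch of how you might check for them on Linux (the sample cpuinfo text in the tests is invented):

```python
# Detect hardware virtualization support from /proc/cpuinfo contents.
# "vmx" = Intel VT-x, "svm" = AMD-V; either is what lets a hypervisor
# run unmodified guests.

def hw_virt_support(cpuinfo_text):
    """Return 'vmx', 'svm', or None for the given /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real Linux box:
# with open("/proc/cpuinfo") as f:
#     print(hw_virt_support(f.read()))
```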

    Another huge advantage is this: imagine you run out of resources and decide to simply buy a fancier computer. You buy the computer, get it up and running, install your virtualization system, copy your files over, and run your old setup. Instantly faster. Oh wait, doesn't work? Just stop it and run it on the old one again.

    John
