Barriers to Virtualization

  • Comments posted to this topic are about the item Barriers to Virtualization

  • When you say "commit crimes" I presume you mean take risks. A "good IT project manager" isn't going to commit crimes... we take risks every day.

    The director of infrastructure where I work asks the question "what have you saved me this week?".

    It goes deeper than the weekly question. Every employee has annual written goals, reviewed and revised bi-annually, that include an acknowledgment of commitment to saving money. Our annual bonuses are based on this along with other factors. It's a culture thing, and everyone is involved.

    Saving money is paramount; risk taking minimized, and crimes, well, not committed.

    During 14 years in the Pentagon, it was often noted that we spend more time in front of the vending machine buying a 50 cent candy bar than we do when spending hundreds, if not hundreds of thousands, of dollars – taxpayer’s money.

    Perhaps that's a crime in itself?

    Thank-you,
    David Russell
    Any Cloud, Any Database, Oracle since 1982

  • I think many of the things you noted are good arguments FOR going virtual. It provides a layer of hardware abstraction, so moving a virtual machine from one physical piece of hardware to another is most often a heck of a lot easier and faster. The same goes for making a complete 'copy' of a system if needed for testing, a staging environment, etc. It might make the initial transition a little more difficult, but once you are running VMs you've got a lot more freedom to do things like take snapshots prior to deploying updates (and roll back in a matter of seconds if things go south) or move the VM from one physical system to another.

    Admittedly this is a bit trickier with VMs where performance is paramount, which are likely to use direct access to their data drives instead of virtual hard disks. In that case you might need to move the physical drives or make an image of them, instead of just copying a single file or directory from one system to another. But moving the VM from one system to another is still a lot easier, thanks to the more generic hardware that the virtualization system emulates.

    Not to mention that relatively simple tweaks, such as allocating another processor or more memory, can literally be done in seconds. It will take longer to shut down and start up the system than it will to change the VM settings to allocate more memory or another CPU. Other changes, like allocating another network card or creating a private VLAN within the host server, can be done on the fly.
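    To give a feel for how small those tweaks are, here is a minimal Python sketch that builds the Hyper-V PowerShell one-liners for the operations described above. Set-VM, Checkpoint-VM, and Restore-VMCheckpoint are real Hyper-V cmdlets; the VM name "web01" and the choice to build the commands as strings rather than run them are assumptions made purely for illustration:

    ```python
    # Build (but don't execute) Hyper-V PowerShell one-liners for the
    # quick tweaks described above: add vCPUs/memory, snapshot, revert.
    # Cmdlet names are real Hyper-V cmdlets; the VM name "web01" is a
    # made-up example.

    def set_vm_resources(vm: str, cpus: int, memory_gb: int) -> str:
        """Command to change vCPU count and startup memory (VM must be off)."""
        return (f"Set-VM -Name {vm} -ProcessorCount {cpus} "
                f"-MemoryStartupBytes {memory_gb}GB")

    def take_snapshot(vm: str, name: str) -> str:
        """Command to take a named checkpoint before deploying an update."""
        return f"Checkpoint-VM -Name {vm} -SnapshotName {name}"

    def revert_snapshot(vm: str, name: str) -> str:
        """Command to roll the VM back if the update goes south."""
        return f"Restore-VMCheckpoint -VMName {vm} -Name {name} -Confirm:$false"

    if __name__ == "__main__":
        print(set_vm_resources("web01", 4, 8))
        print(take_snapshot("web01", "pre-update"))
        print(revert_snapshot("web01", "pre-update"))
    ```

    On a Hyper-V host you could run each string in PowerShell; they are kept as plain strings here so the sketch stays runnable anywhere.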

  • The one factor that I seem to run into when considering the move to virtualization is that it will cost more in things like storage area networks and high-bandwidth switches than I would gain in reduced server costs. I just haven't seen where it will benefit us.

  • I'm kind of with Grasshopper on this. We have a few dozen servers, all with direct attached storage. To go virtual, and really take advantage of high availability and disaster recovery, we would require some kind of network storage. The recommendation also seems to be to go with servers that have virtualization built into them, which means our existing servers will be retired. All of this means we would need to invest significant dollars in hardware.

    The argument of going green falls on deaf ears around here. As a manufacturing facility we have machines that use enormous amount of electricity which dwarfs what we consume in IT. I realize every little bit adds up, but the argument just doesn't get very far with top management.

    All of this being said, we will eventually virtualize our server environment. We will start with some of the easy ones like DNS, DHCP and file/print. Our core ERP system will probably be the last one to be virtualized, and it is the one that would benefit the most in terms of high availability that virtualization would offer.

    I'd be interested to hear from folks who have virtualized some of their large, core systems.

  • dld (6/15/2009)


    The recommendation also seems to be to go with servers that have virtualization built into them, which means our existing servers will be retired. All of this means we would need to invest significant dollars in hardware.

    More than a recommendation, this is required by most serious (i.e. secure, stable, high-performance) virtualization platforms. I know it's a requirement for Hyper-V.

    Then again, most 'server' class CPUs made in the last few years have the hardware virtualization support built in. So yes, you wouldn't be able to retask truly "OLD" hardware for this, but anything fairly recent ought to work.

    Power savings might not be enough to make a big dent in the power bill for the entire company, but it could offset a fair portion of the cost of a new server, especially when you consider the multiplication effect of not needing as much AC for the server room (or pushing what you have closer to its limits), UPS capacity, etc.

    I think it's something that makes a lot more sense to sort of slide into as you are looking at replacing hardware, retiring older boxen etc.

    Another potential benefit is replacing older legacy servers that you know are on the far side of the MTBF curve and are living on borrowed time. Unless you have a need to update the OS or other software for security reasons, it's not too hard to migrate a system like that onto a VM that doesn't need a lot of resources from its host: literally moving the system (using utilities that make it easy), or just doing a backup and then restoring onto a basic VM of the same OS the old system is using.

    Sorry, I'm up on my Virtualization soapbox. I'll get down now. (Just be glad I didn't get into using VMs for test environments... they really rock for that.)
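    The power-savings point above lends itself to back-of-envelope arithmetic. A minimal sketch, where every number is an assumption invented for the example (ten old ~300 W boxes consolidated onto one ~500 W host, $0.10/kWh, and a 1.5× cooling multiplier standing in for the AC effect mentioned above):

    ```python
    # Back-of-envelope power savings from consolidating servers onto one
    # virtualization host. All numbers are illustrative assumptions.

    HOURS_PER_YEAR = 24 * 365  # 8760

    def annual_savings(old_count, old_watts, host_watts,
                       dollars_per_kwh=0.10, cooling_factor=1.5):
        """Yearly dollar savings; cooling_factor multiplies every watt saved
        to account for the AC that no longer has to remove that heat."""
        watts_saved = old_count * old_watts - host_watts
        kwh_saved = watts_saved * cooling_factor * HOURS_PER_YEAR / 1000
        return kwh_saved * dollars_per_kwh

    if __name__ == "__main__":
        # 10 aging 300 W boxes replaced by one 500 W virtualization host
        print(f"${annual_savings(10, 300, 500):,.0f} per year")  # → $3,285 per year
    ```

    That figure ignores UPS capacity, hardware refresh, and licensing, which can cut both ways, but it shows how the savings can offset a chunk of the new server's cost.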

  • SQAPro (6/15/2009)


    dld (6/15/2009)


    Another potential benefit is replacing older legacy servers that you know are on the far side of the MTBF curve, and are existing on borrowed time.

    Thanks for the input. We do have a few older workhorse Sun servers, one of which used to run an Oracle DB, but now sits idle most of the time - running a few perl scripts from cron. These will most likely be our first VM targets.

  • I don't think I'd advocate trashing existing systems and moving to VMs for many people at all. Instead I'd start learning about it, and perhaps shifting some systems to VMs for the practice, and then considering virtualization for new installations.

    I rarely see fewer servers in companies, but with virtualization, you can slow the growth from new physical boxes. And as you get practice, you might move some of those older systems, as their hardware becomes a problem or you want to get rid of it (say for new server space).

  • Suggesting that "Simple servers, such as DNS, DHCP, or Active Directory Servers" are going to be easy for virtualisation seems to me a bit strange. Anyone who has to live with the error rate inherent in having DHCP dynamically updating DNS with zones held in Active Directory will probably have written a "DNS-DHCP reconciliation tool" to fix the numerous discrepancies, and will have concluded that this service collection is far from simple and maintenance-free. Also, in many environments these are mission-critical services, and putting them out in a cloud somewhere where you don't have instant control is maybe not such a good idea.

    DHCP needs to be very fast and reliable in some environments - if it's off at the far end of a slowish pipe it won't be useful, nor will it be if the pipe is unreliable. Same for Active Directory - particularly Kerberos (user validation/login) and applying group policy (if I never again see event 1030 in a Windows system event log caused by network issues, I'll be a happy man).

    For the majority of people, a decision to virtualise the DHCP service is clearly a manifestation of CTD in a non-database sphere! How much server power will you save? How much network cost will you incur? I don't think there's any saving to be made, in fact I'm pretty sure that it would be a pointless and expensive exercise.

    The real barrier isn't porting one's own proprietary applications into the cloud - it's coping with poorly written software provided by third parties. Getting one's own application structure cleaner and more hardware-independent is something that's good to do anyway (it gives us some extra flexibility and future-proofing), so the need for that is not the real barrier.

    Tom
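    The "DNS-DHCP reconciliation tool" mentioned above can be sketched in a few lines. A minimal, hypothetical version that diffs a DHCP lease table against DNS A records, both held as plain dicts for the example (a real tool would query the DHCP server and the DNS zone, e.g. via a zone transfer, instead):

    ```python
    # Minimal sketch of a DNS-DHCP reconciliation check: flag hosts whose
    # DHCP lease and DNS A record disagree, or that exist on only one side.
    # Data is hard-coded here; a real tool would query the DHCP server and
    # DNS zone rather than use literal dicts.

    def reconcile(leases, a_records):
        """leases and a_records map hostname -> IP; return discrepancy list."""
        problems = []
        for host, ip in sorted(leases.items()):
            dns_ip = a_records.get(host)
            if dns_ip is None:
                problems.append(f"{host}: leased {ip} but no A record")
            elif dns_ip != ip:
                problems.append(f"{host}: lease {ip} != A record {dns_ip}")
        for host in sorted(set(a_records) - set(leases)):
            problems.append(f"{host}: stale A record {a_records[host]}, no lease")
        return problems

    if __name__ == "__main__":
        leases = {"pc1": "10.0.0.5", "pc2": "10.0.0.6"}
        a_records = {"pc1": "10.0.0.5", "pc2": "10.0.0.9", "pc3": "10.0.0.7"}
        for line in reconcile(leases, a_records):
            print(line)
    ```

    Nothing about this check changes when the services move into a VM, which is really the point of the thread: the configuration pain stays the same either way.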

  • DNS and DHCP servers do need to respond quickly, but they have low resource requirements, and I've seen many people virtualize these easily.

    Going to a VM doesn't mean your server is off in some cloud. It's often on a box in your same data center, same connectivity. It's just not occupying the whole box.

    File and print servers are good places to start learning: they can tolerate some delays, and you'll get an idea of complaints or comments from users.

  • Our file and print server will probably be our first production VM.

    Being in a manufacturing environment means I have to be very careful about anything that will interrupt the plant process. Down time in the plant means many $$ per minute. Having said that, the IP connected equipment in our plant all has fixed IP addresses. That doesn't mean DHCP is not important, but it would not bring us totally down.

    I would also second Steve's observation, that reliability and speed does not need to be compromised in a virtual environment. Done right, with appropriate hardware, virtualization can rival a dedicated hardware server.

    As far as the "Cloud" goes, putting data into a public cloud environment may have its place, but not for core operating systems.

  • Tom.Thomson (6/16/2009)


    Suggesting that "Simple servers, such as DNS, DHCP, or Active Directory Servers" are going to be easy for virtualisation seems to me a bit strange. Anyone who has to live with the error rate inherent in having DHCP dynamically updating DNS with zones held in Active Directory will probably have written a "DNS-DHCP reconciliation tool" to fix the numerous discrepancies and concluded that this service collection is far from simple and maintenance free. Also, in many environments these are mission-critical services and putting the out in a cloud somewhere where you don't have instant control is maybe not such a good idea.

    who said anything about cloud?

    Let's be very clear here: Virtualization ≠ Cloud

    I'm referring to running multiple virtual machines (VMs) on a virtualization platform such as Hyper-V or VMware - something that works particularly well for servers that sit around idling at 0.1% CPU capacity for 98% of their lives.

    The reference to 'simple' was to the physical requirements (CPU, memory, disk, etc.) of the systems, NOT to configuration. If a service is tricky to configure, that's not going to change - it stays tricky, and using a VM can't really eliminate that pain. OTOH even in this regard there are some benefits, such as making a copy of a properly configured system becoming a cinch: copy a few files (or 'export' the machine) on the host system and you're done. Also, if you want to tinker with settings for any reason, saving a 'base' state as a 'snapshot' in Hyper-V takes seconds - it literally takes me longer to figure out and type in the name for the saved state than it does to create the snapshot. Then I'm free to try some alternate settings, and if it didn't work, or made things worse, I can revert back to the snapshot in a matter of seconds. If things work better, I delete the snapshot and move on.

    (Note: for something like DNS/DHCP you might want to snapshot the system in a shutdown state, since if it's running, it is restored to a perfect copy of its state when you took the snapshot, with the exception of a very few things like the realtime clock. That might not be such a good thing if the snapshot is from a day or two ago - you'd be restoring a bunch of obsolete in-memory data on leases etc. if you reverted back to the 'as running' state.)

    DHCP needs to be very fast and reliable in some environments - if it's off at the far end of a slowish pipe it won't be useful, nor will it be if the pipe is unreliable. Same for Active Directory - particularly Kerberos (user validation/login) and applying group policy.

    Agreed, the 'cloud' is not the place for this kind of system, or anything else where latency is an issue. But as we said before: Virtualization ≠ Cloud.

    For the majority of people, a decision to virtualise the DHCP service is clearly a manifestation of CTD in a non-database sphere! How much server power will you save? How much network cost will you incur? I don't think there's any saving to be made, in fact I'm pretty sure that it would be a pointless and expensive exercise.

    CTD? ok you've stumped me, Circling the Drain? Close to Death? Cheaper Than Dirt? what's your meaning here? I hope I don't have to take points off my geek score for not knowing this.

    Seriously, again I think you are confusing cloud with virtualization. But here's a real-world example for you. I run some 20+ testbed systems (and growing as the need for new configs arises) - Win2K, XP, 2003, Vista, 2008, 32- and 64-bit, with or without SQL 2000 (MSDE), SQL 2005, SQL 2008, and IIS of various vintages - all off a single dual quad-core 2U rackmount server. So what's my power savings vs. having 20+ physical systems (or a smaller number, with constant juggling of what's installed where and tons of disk images)? Especially if we are talking older power-hungry systems like P3 and P4 boxes that didn't have SpeedStep-type tech to lower the power when idle. And don't forget the power for monitors, KVMs, etc. All of that vs. the one server, plus the little Vista desktop that I use to access console sessions on the VMs (I manage them and the host remotely; the server room is 300' from my desk but I haven't set foot inside it in nearly 3 months).

    The real barrier isn't porting ones own proprietary applications into the cloud - it's coping with poorly written software provided by third parties. Getting one's own application structure cleaner and more hardware independent is something that's good to do anyway (it gives us some extra flexibility and future-proofing) so the need for that is not the real barrier.

    Agreed. OTOH once you have a system set up as a VM, it IS running on what it considers to be very generic hardware. So cloning the system, or moving it from one physical host to another, becomes child's play. Yes, that does get complicated somewhat by big disk arrays, SANs, etc., which is one reason to start with the simpler (hardware-wise) systems, get experience there, and then evaluate whether it makes sense for your more sophisticated (hardware-wise) systems.

  • SQAPro (6/16/2009)


    Tom.Thomson (6/16/2009)


    CTD? ok you've stumped me, Circling the Drain? Close to Death? Cheaper Than Dirt? what's your meaning here? I hope I don't have to take points off my geek score for not knowing this.

    Sorry, I should have written it in full: CTD = Compulsive Tuning Disorder - the horrible syndrome suffered by too many DBAs, SysAdmins, and developers (and System Architects) that makes them too busy burnishing the pine needles to notice that there's a forest-wide problem. The term has been around in database circles for 8 or 9 years, I think, coined by Vaidyanatha, who wrote books on Oracle, so it's maybe better known in Oracle circles than in SQL Server circles.

    Tom

  • Steve Jones - Editor (6/16/2009)


    DNS and DHCP servers do need to respond quickly, but they have low resource requirements, and I've seen many people virtualize these easily.

    Going to a VM doesn't mean your server is off in some cloud. It's often on a box in your same data center, same connectivity. It's just not occupying the whole box.

    But what is the point of virtualizing these things? It doesn't save any hardware or software or configuration or backup or recovery planning or anything else that I can see.

    Already, in a typical small-sized installation of our system, the AD, DNS and DHCP services will be running on the same physical server as the active databases, websites, and application services. Assignment of discs (or actually arrays) to various functions is no more complex than assignment of them to VMs, so there's no saving there. The only thing that virtualization of these three services might help with is that maybe I could have the databases in a VM that doesn't contain AD, and not have to worry about NTBackup screwing up the chains of DB backups - but that would require me to refrain from taking system backups of the virtual machine handling the databases, so no thank you very much, I'll live with having to properly mesh the backup schedules and having only short chains!

    As the systems get bigger and more heavily loaded, there will be extra servers. All of them will run all the software and all the services, except that DHCP will run on only one of them at any time. Virtualization doesn't appear to help there either.

    I have serious uses for virtualization, but not for this sort of stuff. In development of embedded systems (clients, not servers) I really want to have a nice big engine that does builds and interfaces to the source control service and the release service and so on, but (a) I don't want to have to move to a different machine for testing, and (b) I have to run the hardware discovery build on the target machine, not the hefty build machine; so having a build VM and a hw discovery and test VM (and maybe a separate debug VM) all on the same hardware is extremely useful. Maybe source control and release management and software configuration control for the embedded system will be in VMs on the same hardware too. Maybe the server I test the embedded app against can be another VM on the same hardware. Putting large-scale services onto VMs in a production environment can make sense too, but surely hiving off things like DHCP, DNS and AD onto separate VMs is more trouble than it's worth?

    Tom

  • AD is distributed, living on multiple servers, so I'm not sure what you're getting at with the AD database.

    Lots of people do run DNS/AD DCs (domain controllers) and DHCP on the same box, but there can be quite a few of them. In one environment, we had 4 or 5 DNS servers (not dedicated) and over a dozen DHCP servers. Add in print servers and file servers, and we can easily be talking 2-3 dozen servers in a large environment.

    Why use VMs? Part of it is being able to separate off services to separate Windows instances, perhaps for compatibility, perhaps because departments want their own server (political reasons).

    As you move to dedicated machines for certain functions - say an app to handle door security, maybe one for network config software, maybe one for filtering software - you can end up with a proliferation of machines that don't really need a full physical server, but require separation from other apps for some reason. It might be as simple as an ignorant vendor that won't provide support unless the app is on its own Windows instance.

    VMs make sense. Not everywhere, and not for all production systems, but they make sense.

