• Tom.Thomson (6/16/2009)


    Suggesting that "Simple servers, such as DNS, DHCP, or Active Directory Servers" are going to be easy for virtualisation seems to me a bit strange. Anyone who has to live with the error rate inherent in having DHCP dynamically update DNS with zones held in Active Directory will probably have written a "DNS-DHCP reconciliation tool" to fix the numerous discrepancies, and concluded that this service collection is far from simple and maintenance-free. Also, in many environments these are mission-critical services, and putting them out in a cloud somewhere where you don't have instant control is maybe not such a good idea.
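    (As an aside, for anyone who hasn't had to write one: a "DNS-DHCP reconciliation tool" of the kind mentioned above boils down to diffing the DHCP lease table against the zone's A records. A toy sketch with made-up hostnames and addresses; a real tool would query the DHCP server and the DNS zone instead of using literals:)

```python
# Toy sketch of a DNS-DHCP reconciliation pass. All data here is
# hypothetical; a real tool would pull leases from the DHCP server and
# A records via a zone transfer or AD/LDAP query.

def reconcile(leases, dns_a_records):
    """Compare DHCP leases (host -> ip) against DNS A records (host -> ip)
    and return the three kinds of discrepancy."""
    # Leased hosts that never got registered in DNS.
    missing = {h: ip for h, ip in leases.items() if h not in dns_a_records}
    # DNS records left behind after their lease went away.
    stale = {h: ip for h, ip in dns_a_records.items() if h not in leases}
    # Hosts present in both, but pointing at different addresses.
    mismatched = {h: (leases[h], dns_a_records[h])
                  for h in leases.keys() & dns_a_records.keys()
                  if leases[h] != dns_a_records[h]}
    return missing, stale, mismatched

leases = {"pc-01": "10.0.0.5", "pc-02": "10.0.0.6", "pc-03": "10.0.0.7"}
dns    = {"pc-01": "10.0.0.5", "pc-02": "10.0.0.9", "pc-04": "10.0.0.8"}

missing, stale, mismatched = reconcile(leases, dns)
print(missing)     # pc-03: leased but never registered in DNS
print(stale)       # pc-04: record left behind after its lease expired
print(mismatched)  # pc-02: re-leased a new address, DNS not updated
```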

    who said anything about cloud?

    Let's be very clear here: Virtualization ≠ Cloud

    I'm referring to running multiple virtual machines (VMs) on a virtualization platform such as Hyper-V or VMware, something that works particularly well for servers that sit around idling at like 0.1% CPU capacity for 98% of their lives.

    The reference to 'simple' was to the physical requirements (CPU, memory, disk, etc.) of the systems, NOT to configuration. If a service is tricky to configure, that's not going to change; it stays tricky, and using a VM can't really eliminate that pain. OTOH, even in this regard there are some benefits, such as making a copy of a properly configured system becoming a cinch: copy a few files (or 'export' the machine) on the host system and you're done. Also, if you want to tinker with settings for any reason, saving a 'base' state as a 'snapshot' in Hyper-V takes seconds. It literally takes me longer to figure out and type in the name for the saved state than it does to create the snapshot. Then I'm free to try some alternate settings, and if it doesn't work, or makes things worse, I can revert to the snapshot in a matter of seconds. If things work better, I delete the snapshot and move on.
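    (The snapshot/try/revert loop above can be modeled in a few lines. This is NOT Hyper-V itself, just a toy model of the semantics; real snapshots are copy-on-write differencing disks plus saved memory state, and the class and field names here are invented for illustration:)

```python
import copy

class ToyVM:
    """Toy model of the snapshot/revert workflow described above."""

    def __init__(self, settings):
        self.settings = settings
        self._snapshots = {}

    def snapshot(self, name):
        # Save a named copy of the current state (the slow part in real
        # life is typing the name, not taking the snapshot).
        self._snapshots[name] = copy.deepcopy(self.settings)

    def revert(self, name):
        # Throw away the current state and restore the saved one.
        self.settings = copy.deepcopy(self._snapshots[name])

    def delete_snapshot(self, name):
        # The experiment worked: keep the new state, drop the fallback.
        del self._snapshots[name]

vm = ToyVM({"dns_forwarder": "10.0.0.1", "scavenging": False})
vm.snapshot("base")                  # known-good state
vm.settings["scavenging"] = True     # tinker with a setting...
vm.revert("base")                    # ...didn't like it, roll back
print(vm.settings["scavenging"])     # → False
```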

    (Note: for something like DNS/DHCP you might want to snapshot the system in a shut-down state. If it's running, reverting restores a perfect copy of its state at the moment you took the snapshot, with the exception of a very few things like the real-time clock. That might not be such a good thing if the snapshot is from a day or two ago: going back to the 'as running' state would restore a bunch of obsolete in-memory data on leases, etc.)

    DHCP needs to be very fast and reliable in some environments - if it's off at the far end of a slowish pipe it won't be useful, nor will it be if the pipe is unreliable. Same for Active Directory - particularly Kerberos (user validation/login) and applying group policy.

    Agreed, the 'cloud' is not the place for this kind of system, or anything else where latency is an issue. But as we said before: Virtualization ≠ Cloud

    For the majority of people, a decision to virtualise the DHCP service is clearly a manifestation of CTD in a non-database sphere! How much server power will you save? How much network cost will you incur? I don't think there's any saving to be made, in fact I'm pretty sure that it would be a pointless and expensive exercise.

    CTD? OK, you've stumped me. Circling the Drain? Close to Death? Cheaper Than Dirt? What's your meaning here? I hope I don't have to take points off my geek score for not knowing this.

    Seriously, again I think you are confusing cloud with virtualization. But here's a real-world example for you. I run some 20+ (and growing as the need for new configs arises) testbed systems (Win2K, XP, 2003, Vista, 2008, 32- and 64-bit, with or without SQL 2000 (MSDE), SQL 2005, SQL 2008, IIS of various vintages), all off a single dual quad-core 2U rackmount server. So what are my power savings vs. having 20+ physical systems (or a smaller number and constantly having to juggle what's installed where, plus tons of disk images)? Especially if we are talking older power-hungry systems like P3 and P4 boxes that didn't have SpeedStep-type tech to lower the power when idle. Oh, and the power for monitors, KVMs, etc. as well. All that vs. the one server and the little Vista desktop I use to access console sessions on the VMs (I manage them and the host remotely; the server room is 300' from my desk but I haven't set foot inside it in nearly 3 months).
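    (Back-of-envelope math on that consolidation. Every wattage below is an assumption for illustration, not a measurement; older P3/P4 boxes idle high precisely because they lack SpeedStep-style throttling, so plug in your own numbers:)

```python
# Rough consolidation power math. ALL wattages are assumed figures for
# illustration only; substitute measured values for a real estimate.

physical_boxes  = 20
idle_watts_each = 150   # assumed: an older P4-era box idling, no throttling
monitor_kvm_w   = 200   # assumed: shared monitors, KVMs, etc.
host_watts      = 450   # assumed: one dual quad-core 2U host under load

before = physical_boxes * idle_watts_each + monitor_kvm_w   # 3200 W
after  = host_watts

def kwh_per_year(watts):
    # Continuous draw converted to annual energy use.
    return watts * 24 * 365 / 1000

saved = kwh_per_year(before) - kwh_per_year(after)
print(f"{before - after} W continuous, ~{saved:,.0f} kWh/year saved")
```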

    The real barrier isn't porting ones own proprietary applications into the cloud - it's coping with poorly written software provided by third parties. Getting one's own application structure cleaner and more hardware independent is something that's good to do anyway (it gives us some extra flexibility and future-proofing) so the need for that is not the real barrier.

    Agreed. OTOH, once you have a system set up as a VM, it IS running on what it considers to be very generic hardware, so cloning the system, or moving it from one physical host to another, becomes child's play. Yes, that does get somewhat complicated by big disk arrays, SANs, etc., which is one reason to start with the simpler (hardware-wise) systems, get experience there, and then evaluate whether it makes sense for your more sophisticated (hardware-wise) systems.