As I went walking last night, I listened to two podcasts from RunAs Radio: interviews with Michael Manos and with Danielle & Nelson Ruest. Michael Manos is the senior director of Data Center Services at Microsoft, and he was talking about some of the optimizations Microsoft had made to reduce the energy consumption of their data centers. Things like running studies on cooling with just outside air, measuring transactions against the power consumed, and the like came out of the discussion. Virtualization was a topic with Mr. Manos, and it was the primary topic for Danielle & Nelson Ruest.
There's a lot of focus on reducing energy usage by data centers, and this is a good thing, obviously. Going green is in everyone's long-term best interests. But it really got me thinking, from an infrastructure architecture perspective, about what could potentially be done. With companies like Intel experimenting with cooling technologies and strategies, and other organizations like HP looking at power management solutions, I think we've started to address how to reduce energy with respect to cooling. However, that's really only scratching the surface.
When I think of solutions like Citrix's Provisioning Server, Citrix's XenServer, and VMware's ESX Server combined with these types of ideas, it would be theoretically possible to "spin up" only the hardware needed for the current load. For instance, if Provisioning Server is used to deploy images to servers, physical or virtual, then as more users come onto systems and the load increases, physical servers can be started up and immediately "provisioned" with images. These are streamed, meaning they come up in minutes (POST checks and the like comprising most of this time). Applications are virtualized, meaning it would be possible to deploy the right apps or services in an on-demand model. XenServer and ESX Server can be used to spin up virtual machines and move them around in real time across different physical hosts (I'm not forgetting about Hyper-V, but its lack of real-time movement of virtual machines impairs the vision).
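To make the idea concrete, here's a minimal sketch of that kind of demand-driven provisioning loop. Everything here is hypothetical (the `Host` class, the per-host session capacity, the image name); a real implementation would trigger Wake-on-LAN or IPMI power-on and a Provisioning Server image stream where this just flips a flag.

```python
CAPACITY_PER_HOST = 100   # sessions one provisioned host can serve (assumed)

class Host:
    def __init__(self, name):
        self.name = name
        self.powered_on = False

    def power_on_and_stream(self, image):
        # Stand-in for real power-on plus a streamed image boot.
        self.powered_on = True
        self.image = image

def reconcile(hosts, current_sessions, image="xenapp-gold.vhd"):
    """Power on just enough hosts for the current load; idle the rest."""
    needed = -(-current_sessions // CAPACITY_PER_HOST)  # ceiling division
    for i, host in enumerate(hosts):
        if i < needed and not host.powered_on:
            host.power_on_and_stream(image)
        elif i >= needed:
            host.powered_on = False   # spin down surplus hardware
    return sum(h.powered_on for h in hosts)

farm = [Host(f"host{i}") for i in range(8)]
print(reconcile(farm, 250))   # 3 hosts active for 250 sessions
print(reconcile(farm, 90))    # load drops, only 1 host stays on
```

The point is only that the control logic itself is trivial once the plumbing (power control, image streaming, load telemetry) exists.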
Now let's go a step further. Imagine a larger command and control system which understood the power and cooling systems, where the physical hardware was, how the various systems interacted, and how increased load was supposed to be handled, and which had the ability to interface with all of those systems. You could then bring systems up and down in accordance with demand and distribute them across the data center to maximize the effectiveness of the cooling and power systems, which means you could run those at lower capacity and ramp them up on demand as well.
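A tiny sketch of the placement side of such a system, under invented assumptions: the data center is divided into cooling zones with known capacity, and new workloads go to the host in the zone with the most cooling headroom, so no one cooling unit has to ramp up. Zone names, wattages, and the `place_vm` function are all hypothetical.

```python
zones = {
    "zone-a": {"cooling_kw": 40, "hosts": {"host1": 0.0, "host2": 0.0}},
    "zone-b": {"cooling_kw": 40, "hosts": {"host3": 0.0}},
}

def place_vm(zones, vm_kw):
    """Return the (zone, host) with the most cooling headroom for this VM."""
    best = None
    for zname, zone in zones.items():
        heat = sum(zone["hosts"].values())       # heat currently in this zone
        headroom = zone["cooling_kw"] - heat
        if headroom >= vm_kw and (best is None or headroom > best[0]):
            # within the chosen zone, pick the least-loaded host
            host = min(zone["hosts"], key=zone["hosts"].get)
            best = (headroom, zname, host)
    if best is None:
        raise RuntimeError("no zone has cooling headroom; wake more capacity")
    _, zname, host = best
    zones[zname]["hosts"][host] += vm_kw          # account for the new heat
    return zname, host

print(place_vm(zones, 5.0))   # lands in zone-a
print(place_vm(zones, 5.0))   # zone-b now has more headroom, so it goes there
```

A real version would be far messier (airflow isn't zonal, heat isn't linear in load), but the greedy heuristic shows the shape of the decision the command and control layer would be making constantly.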
I'll grant that this is massively complex. It is certainly pie in the sky, and there are likely limitations that would prevent achieving this kind of vision. However, it would be awesome to model. My undergraduate background includes mathematical modeling, so that's the direction my mind spun towards. This would likely involve non-linear solutions, but with the computing power at our disposal today, I wonder whether such a command and control application could feasibly run on today's server hardware. If it could be done and the limitations overcome, in larger environments there could be significant cost savings.
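Even a toy version of the model shows why it's non-linear: idle servers still burn power, so per-host power draw is a non-linear function of utilization, and the energy-optimal number of active hosts for a given load isn't obvious up front. The coefficients below are invented purely for illustration; a brute-force search stands in for a real optimizer.

```python
def host_power(util):
    """Power draw (watts) of one host at fractional utilization 0..1.

    Invented curve: ~200 W idle, rising non-linearly toward ~400 W at full load.
    """
    return 200 + 200 * util ** 1.5

def total_power(load, n_hosts, host_capacity=100):
    """Total fleet power for serving `load` sessions on `n_hosts` hosts."""
    if n_hosts * host_capacity < load:
        return float("inf")          # infeasible: can't serve the load
    util = load / (n_hosts * host_capacity)
    return n_hosts * host_power(util)

def best_fleet(load, max_hosts=20):
    """Brute-force the host count that minimizes total power for this load."""
    return min(range(1, max_hosts + 1), key=lambda n: total_power(load, n))

print(best_fleet(450))   # fewest feasible hosts win when idle power dominates
```

With these made-up numbers the idle draw dominates, so consolidating onto the fewest hosts that can carry the load wins, which is exactly the intuition behind spinning hardware down. The real problem adds cooling interactions, migration costs, and demand forecasting, which is where it stops being a one-liner and starts being a genuine non-linear optimization.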