This editorial was originally published on Jan 18, 2016. It is being re-run as Steve is on holiday.
I suppose, like most parents, and my parents before me, I spend a lot of time telling my kids to “turn out the lights”. What is it about kids and leaving lights, TVs, game consoles, and other devices perpetually running? I’ve tried explaining the cost to them, but until I find a way to make chargebacks against their allowances, I don't think the logical argument is likely to work. It isn’t that they (or I, way back when) don’t care; they're just thinking about other things. I console myself that at least the LED lights I've installed mean I’m not spending as much as I used to!
What does that have to do with IT and databases? We’ve been a lot like my kids – we run hardware without much, if any, thought to the cost. We disable the BIOS settings that might throttle the CPU down because we (think we) need max power all the time. We worry about maxing out capacity, so we over-provision everything and are close to gleeful when we see a production server running at 5% utilization, knowing that we have room to handle the spikes that will surely come.
We’ve seen CIOs push back obliquely on this via virtual machines (VMs). VMs have a lot of advantages, and they are reasonably effective at reducing some of the waste from over-provisioning. But VMs are magic – more or less – to us, and this means that we still don’t have to learn to turn out the lights. To be fair, it makes sense to drive that behavior down the stack to the hypervisor, but there is a difference between designing for less usage and using a hypervisor to mask it. Chargebacks are another technique, pushing the cost back down to the department that is generating the demand (or that doesn’t care to optimize).
Short of chargebacks, we don’t think about costs a lot. Hardware is the ‘cost of doing business’ and it’s a capital expense, something everyone is used to and accepts. As we move to the cloud and the charge-by-the-minute/gigabyte/service-call/virtual-machine model, we move to the land of operating expenses (OPEX), and every CIO out there actively manages those. I think we’re going to see a lot more thought going into using services that charge per use versus provisioning virtual machines that run all the time. Learning to scale key resources up and down effectively is going to matter. I can see the time coming when the CIO has the utilization chart up on a screen in a meeting, asking, “Why did it take us 30 minutes to scale down after the sale ended?”
Spend a few minutes thinking about how your business might change if everything were an operating expense. Would it be easier to get someone to agree to mark a table as compressed, or to get a miserably inefficient proc fixed? Will archiving data suddenly become interesting? Will you someday soon hear the CIO shouting down the hall to turn off the lights?