Why Billing Will Be Part of Our Job

  • "For some of us, the refocus on the cost of systems may give us some help in pushing developers to write better code from the beginning,"

    Somehow, teaching database design is out of fashion.  It needs to come back!  After all, SQL Server databases underlie most commercial systems, and unless you know database design reasonably well, you can't judge when to leverage a SQL database and when to go NoSQL.

    Being a UK polytechnic graduate from the days when polytechnics were worth going to for their good teaching, I was taught to create my SQL data in 3rd Normal Form (and even have a well-thumbed copy of the Date & Codd book to prove it).  However, for well over a decade I've noticed that 3NF and ER diagrams are deeply out of fashion, and even those who graduate from good degree courses are at best ignorant of them, and more commonly disdainful.  EF is great, but it has its limits, and in the world of JavaScript, C# and .NET development that's not commonly recognised.  Code First is king, but no one has the skill to model their data when it doesn't do a good job.  As I don't want my age to be recognised, I swallow my frustration with a "How about putting that in 3rd Normal Form?  Tools usually perform best when they're used as designed."
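
    As a rough sketch of what that conversation is about (the table and column names below are invented for illustration, not from any real system), the usual Code First shortcut is one wide table that repeats customer details on every order row, whereas 3NF puts each fact in the table whose key it depends on:

        -- Denormalised: customer details repeat on every order row,
        -- so a change of address means updating many rows.
        CREATE TABLE OrdersWide (
            OrderID       int           NOT NULL PRIMARY KEY,
            OrderDate     date          NOT NULL,
            CustomerName  nvarchar(100) NOT NULL,
            CustomerEmail nvarchar(256) NOT NULL,
            CustomerCity  nvarchar(100) NOT NULL
        );

        -- 3rd Normal Form: customer facts depend only on the customer key,
        -- order facts depend only on the order key.
        CREATE TABLE Customers (
            CustomerID    int           NOT NULL PRIMARY KEY,
            CustomerName  nvarchar(100) NOT NULL,
            CustomerEmail nvarchar(256) NOT NULL,
            CustomerCity  nvarchar(100) NOT NULL
        );

        CREATE TABLE Orders (
            OrderID    int  NOT NULL PRIMARY KEY,
            OrderDate  date NOT NULL,
            CustomerID int  NOT NULL REFERENCES Customers (CustomerID)
        );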

  • Back at the beginning of my career, departments were charged for CPU time.  I had a boss who found out that charges were less after hours.  His plan was to have people work overtime in the evening to run things then.  So he was going to trade real dollars going out the door in payroll to save internal cross-charge-bucks from the department budget.  Payroll wasn't in the department budget, so he'd come out ahead.  Good full-picture view there, Bill.  (This was the same guy who borrowed department cars for "evaluation" so often that he forgot where his personal car was parked, and sent someone out to find it.)

  • As long as the management philosophy around development is "get it out the door as fast as possible" you will not see time spent on optimizing code up-front (though you have to do it later when systems are crawling... there's never time to do it right but there's always time to do it again... a vicious cycle).

  • I like the focus of this article. Improving code, making it more efficient, is a great goal. However, I've been around long enough to see that there isn't much enthusiasm for improving code, either from management or from developers, unless something breaks. I'm working on new software now where we're required to do SELECT * all the time. If the table has 400 columns, you bring everything back and display it all to the user in a data grid. Then we have to build a separate form so the user can indicate which columns they want to see. What that means is we still do SELECT *, but then filter the columns out before displaying the data in the grid.
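
    As a rough sketch of the alternative (the table and column names here are invented for illustration), the column-picker form could drive the query itself, so only the columns the grid will actually show ever cross the wire:

        -- Instead of dragging all 400 columns back to the client:
        SELECT *
        FROM dbo.CustomerAccounts;

        -- ...ask only for what the user picked in the column-picker form:
        SELECT AccountID, AccountName, Balance, LastActivityDate
        FROM dbo.CustomerAccounts;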

    Kindest Regards, Rod.  Connect with me on LinkedIn.

  • In a traditional on-prem IT department, compute and storage are a capitalized expense (CapEx), so if $500,000 is spent on a shiny new data warehouse server, then management wants to see it being utilized (even if it's not doing anything particularly useful). However, with cloud-hosted databases, we pay for units of resources consumed as an operational expense (OpEx), and management can (or at least should) know exactly how much each application is costing the department relative to the overall departmental bill.

    There is talk of self-tuning databases that provide things like auto-indexing and auto-scaling (vertically, horizontally, and back down again when not needed), but from what I've seen there are no tools that write optimized SQL for the user - not today, and not in the foreseeable future. So not only are refactoring and optimization still a job going forward - they're now quantifiable and even more front and center.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • >> I am old enough to remember when many large corporations implemented chargebacks. Essentially, each internal department was charged for their usage of IT systems, similar to how we are charged in the cloud. <<

    When Jimmy Carter was the governor of Georgia, he created the Department of Administrative Services (DOAS), which pooled together common functions of all the state agencies and departments, such as motor pools, printing and IT services. The rule was that if you could do it with an outside service cheaper than DOAS, then you could outsource it. The problem was that, over the decades before Carter, the agencies had an arrangement where they rotated IBM equipment among themselves because of the leasing arrangements that IBM used in those days.

    There was also another little problem with the software licenses. Each agency or department had its own software packages, so putting them into a single, central agency was a disaster (how many different packages for specialized purposes can you learn?). My personal experience as a DOAS employee was being told to do something for the education department that involved their payroll. It meant learning to use a package I had never seen before, so I called up the software vendor. I got a recorded message - "the number you have tried to reach is no longer in service" - instead of a real person. I spent a few days trying to find former education department personnel who had used the package before the consolidation.

    The figure I remember from all those decades ago was that the state of Georgia lost $5 million (back then that was real money!). Then Jimmy Carter failed upward from being a bad governor of Georgia to being a bad president of the United States, and finally a remarkably admirable ex-president of the United States.

    Please post DDL and follow ANSI/ISO standards when asking for help. 

  • Turns out this has been quite a good prediction.

    The chargeback system could have an unfortunate effect on corporate behaviours.  If a server was bought for a use case isolated to a particular department then, even if there was spare capacity, that department regarded it as "their" server.   After all, they'd been charged for it.

    To an extent the move to virtual servers made this less contentious though licensing costs still resulted in charges.

    I worked closely with an infrastructure manager who regarded the chargeback system as being "wooden dollars".  In reality the organisation bought a server, not a department.  Some aspects of IT spend were not as easily isolated as you might think.

    In the cloud it isn't "wooden dollars" any more.  It is hard currency.  A cloud vendor gives you a bill, you pay the bill, bar a few discrepancies there is no wriggle room.

    Except for individual staff accounts (where costs are obvious because they are in a specific account), all infrastructure was deployed using either CloudFormation (AWS) or Terraform.  Very early on we devised a cost-tagging taxonomy, and every cloud facility that supported tags had the taxonomy applied.  The cost breakdown isn't the output of some mysterious alchemy in Finance with all sorts of attribution formulae in Excel spreadsheets.  It is a measured and largely unarguable thing.  It can also be frighteningly large.
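
    As a rough sketch of what that tagging buys you (the table name and tag columns below are assumptions, standing in for wherever your billing export lands, not any vendor's actual schema), the monthly bill can be grouped straight by the taxonomy's tags rather than reverse-engineered in a spreadsheet:

        -- Hypothetical table holding exported billing line items, with the
        -- cost-allocation tags from the taxonomy surfaced as columns.
        SELECT
            resource_tag_cost_centre AS CostCentre,
            resource_tag_application AS Application,
            SUM(unblended_cost)      AS MonthlyCost
        FROM dbo.CloudBillingLineItems
        WHERE billing_period = '2024-01'
        GROUP BY resource_tag_cost_centre, resource_tag_application
        ORDER BY MonthlyCost DESC;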
