• Mike Dougherty-384281 (8/2/2012)


    Steve, if anyone does share their big cloud operations with you, please beg for permission to share the gist of it with us.

    One application that comes to mind for $2m/day computing costs is pharma research. If 770,000 cores can do in 1 day what on-site resources would do in a month, then the time saved can be worth the cost. I would definitely like to hear about the type (and volume) of data, as well as how and why it makes sense to outsource the computation.

    I'll definitely try to share whatever I can learn. Some of the places I linked are examples of what I've seen. The big win seems to be avoiding the up-front investment needed for large-scale computing. There's definitely a tipping point here. I've seen this before in the *Nix world with large IBM machines that contained extra hardware we hadn't licensed; we could activate it for short periods as needed, paying a "rental" fee.

    As an example, around 2001/2002, we had a large 64-CPU AIX server, but we were licensed for 36 CPUs. That's what we "bought". At end of quarter, we could "rent" an additional 10-12 CPUs for 2-3 days with a license key. AIX allowed hot-add of the CPUs, so this worked well for us. Our calculations showed that renting was worthwhile up to roughly 90 days of use per year; beyond that, buying the CPUs outright would have been cheaper. Since we were only looking at 8-10 days a year, renting was the better deal.
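
    To make that breakeven arithmetic concrete, here's a minimal sketch. The dollar figures are invented for illustration; only the roughly 90-day breakeven and the 8-10 days of actual yearly need come from the story above.

    ```python
    # Back-of-envelope rent-vs-buy breakeven for burst CPU capacity.
    # Both cost constants are hypothetical; they're chosen so the
    # breakeven lands near the ~90 days/year figure from the anecdote.

    CPU_PURCHASE_COST = 45_000.0   # hypothetical one-time cost per CPU (hardware + license)
    CPU_RENTAL_COST = 500.0        # hypothetical rental fee per CPU per day

    def breakeven_days(purchase_cost: float, rental_per_day: float) -> float:
        """Days of use per year at which buying beats renting."""
        return purchase_cost / rental_per_day

    def yearly_cost(days_needed: int, rent: bool) -> float:
        """Cost of covering `days_needed` burst days in one year."""
        return days_needed * CPU_RENTAL_COST if rent else CPU_PURCHASE_COST

    if __name__ == "__main__":
        print(f"Breakeven: {breakeven_days(CPU_PURCHASE_COST, CPU_RENTAL_COST):.0f} days/year")
        for days in (10, 90, 120):
            cheaper = "rent" if yearly_cost(days, True) < yearly_cost(days, False) else "buy"
            print(f"{days:>3} days/year -> {cheaper}")
    ```

    With these numbers the breakeven is 45,000 / 500 = 90 days: at 10 days a year renting costs $5,000 against $45,000 to buy, while past 90 days the rental fees exceed the purchase price. The same comparison applies to cloud bursting versus owned hardware.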

    I think that's what cloud computing gets you when it's done well: you can burst capacity where you need it. If the load is steady, at some point you probably do better purchasing your own equipment.