The Subtle Push to the Cloud

  • Steve Jones - SSC Editor (4/29/2014)


    Alex Gay (4/29/2014)


    There is no way that we in the NHS could use cloud computing in general, or Azure in particular, especially for databases that contain patient data. The risk of unauthorized access is too high to begin with, and when you add the fact that a US judge could order Microsoft to grant access to your data (see here http://www.theregister.co.uk/2014/04/28/us_judge_digital_search_warrants_apply_everywhere/), we wouldn't be able to guarantee the required level of data security. The ICO (Information Commissioner's Office) would gut us; it would probably cost more in fines for breach of patient data security than we would save in licensing and hardware, which is the point of the legislation.

    Don't bet on that. Amazon succeeded in getting many US governments into clouds, by building separate data centers and locating them in the US. It's possible we'll see an Azure government cloud located in the UK, for UK customers.

  • Ed Pollack wrote:

    ... if your product has an uptime requirement of 99.95, then Azure is immediately out-of-the-question as it promises only 99.9%.

    The difference in uptime, over the course of a year, between 99.95% and 99.90%, is 4.38 hours - which, if my arithmetic is right - is less than 45 seconds a day.

    I'm not sure this difference would be terribly meaningful to most customers. And I would question whether an in-house staff could maintain database uptime much superior to that offered by Azure.
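    For what it's worth, the arithmetic does check out. A quick Python sketch (the helper name is my own, not anything from the thread):

    ```python
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    def annual_downtime_hours(availability_pct):
        # Hours of permitted downtime per year at a given availability percentage.
        return HOURS_PER_YEAR * (1 - availability_pct / 100)

    difference = annual_downtime_hours(99.90) - annual_downtime_hours(99.95)
    print(round(difference, 2))               # 4.38 hours per year
    print(round(difference * 3600 / 365, 1))  # 43.2 seconds per day
    ```

    So the gap between the two SLA levels really is under 45 seconds a day.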

  • GoofyGuy (4/29/2014)


    Ed Pollack wrote:

    ... if your product has an uptime requirement of 99.95, then Azure is immediately out-of-the-question as it promises only 99.9%.

    The difference in uptime, over the course of a year, between 99.95% and 99.90%, is 4.38 hours - which, if my arithmetic is right - is less than 45 seconds a day.

    I'm not sure this difference would be terribly meaningful to most customers. And I would question whether an in-house staff could maintain database uptime much superior to that offered by Azure.

    Hell, who has an ISP with 99.95% uptime?

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • GoofyGuy (4/29/2014)


    Ed Pollack wrote:

    ... if your product has an uptime requirement of 99.95, then Azure is immediately out-of-the-question as it promises only 99.9%.

    The difference in uptime, over the course of a year, between 99.95% and 99.90%, is 4.38 hours - which, if my arithmetic is right - is less than 45 seconds a day.

    I'm not sure this difference would be terribly meaningful to most customers. And I would question whether an in-house staff could maintain database uptime much superior to that offered by Azure.

    When the customers are internal and easy to work with and to set expectations for, that is completely, 100% true, but an unexpected outage to mission-critical software may not be tolerated even for a few minutes---and certainly not for 4 hours.

    My big concern is the compounding of uptimes...99.9% uptime equates to 8.76 hours per year of unscheduled downtime. If Azure had an unexpected, full 9-5 outage one day, they would still be within their SLA.

    Is maintaining this level of uptime difficult? Sure---but for some business applications it is necessary, even if it seems a bit over-the-top!

  • Ed Pollack (4/29/2014)


    GoofyGuy (4/29/2014)


    Ed Pollack wrote:

    ... if your product has an uptime requirement of 99.95, then Azure is immediately out-of-the-question as it promises only 99.9%.

    The difference in uptime, over the course of a year, between 99.95% and 99.90%, is 4.38 hours - which, if my arithmetic is right - is less than 45 seconds a day.

    I'm not sure this difference would be terribly meaningful to most customers. And I would question whether an in-house staff could maintain database uptime much superior to that offered by Azure.

    When the customers are internal and easy to work with and to set expectations for, that is completely, 100% true, but an unexpected outage to mission-critical software may not be tolerated even for a few minutes---and certainly not for 4 hours.

    My big concern is the compounding of uptimes...99.9% uptime equates to 8.76 hours per year of unscheduled downtime. If Azure had an unexpected, full 9-5 outage one day, they would still be within their SLA.

    Is maintaining this level of uptime difficult? Sure---but for some business applications it is necessary, even if it seems a bit over-the-top!

    Compounding is a HUGE issue that the C-Suite and IT departments don't seem to understand. Whether we are talking about securing the "cloud" or uptime, both of these have risks. With security, compounding means that we are less secure, with uptime it means we have less uptime. Why? Because any vulnerability we have is still present - PLUS every vulnerability the vendor has. The same goes with downtime, if either we or the vendor are down, we are down. So using Ed's example from earlier, we have to multiply one times the other. If both have a 99% uptime, the total is no better than 98%. If both have 97%, the result is only 94% or worse. I am using these numbers to show the effect - kind of like a bank showing compound interest for rates nobody has seen in years!

    Dave
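    Dave's "multiply one times the other" rule is easy to verify in a couple of lines of Python (a sketch of compounding for serially dependent systems, nothing more):

    ```python
    def combined_availability(*availabilities_pct):
        # Serial dependence: the system is up only when every component is up,
        # so the combined availability is the product of the individual ones.
        product = 1.0
        for pct in availabilities_pct:
            product *= pct / 100
        return product * 100

    print(round(combined_availability(99.0, 99.0), 2))  # 98.01 -- "no better than 98%"
    print(round(combined_availability(97.0, 97.0), 2))  # 94.09 -- "only 94% or worse"
    print(round(8760 * (1 - 99.9 / 100), 2))            # 8.76 hours/year at 99.9%
    ```

    This assumes the two parties' outages are independent; correlated outages change the numbers but not the direction of the effect.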

  • djackson wrote:

    ... with uptime [compounding] means we have less uptime. Why? Because any vulnerability we have is still present - PLUS every vulnerability the vendor has.

    I would argue the same vulnerabilities exist both in the cloud and on-prem, and they're unpredictable regardless of the site.

    I would also argue the larger vendors have a level of expertise at least equal to that found in-house, in keeping the wheels turning, and that competitive pressures between the vendors result in incremental performance gains over time.

  • Steve Jones - SSC Editor (4/29/2014)


    Don't bet on that. Amazon succeeded in getting many US governments into clouds, by building separate data centers and locating them in the US. It's possible we'll see an Azure government cloud located in the UK, for UK customers.

    Well, Microsoft currently has Azure data centres in Ireland and the Netherlands to satisfy EU legislation; however, the US court ruling effectively torpedoes the argument that Microsoft runs an EU-compliant cloud offering.

    As to governments hosting their resources in another government's cloud, that would be as daft as selling strategic energy resources to foreign powers... eeeerrrrrrmm, oh dear. Well, I'm sure that nice Mr Putin will keep the gas supply intact.

  • GoofyGuy (4/29/2014)


    djackson wrote:

    ... with uptime [compounding] means we have less uptime. Why? Because any vulnerability we have is still present - PLUS every vulnerability the vendor has.

    I would argue the same vulnerabilities exist both in the cloud and on-prem, and they're unpredictable regardless of the site.

    I would also argue the larger vendors have a level of expertise at least equal to that found in-house, in keeping the wheels turning, and that competitive pressures between the vendors result in incremental performance gains over time.

    Not my point at all, although I agree with your point on competition.

    However, it is unlikely that both the customer and the vendor have the exact same vulnerabilities. It is likely they have different ones, although most might be in the same subset.

    Nonetheless, it is irrelevant. Why? Because if there are 10 vulnerabilities that the vendor has, and I have 15, then the total number is not 10! It can't be - to access their "cloud" I have to do so from my location, so any weakness I have is ADDED TO the vendor's list. If we both somehow had the same 10, it is still no more secure. If they patch one and we don't, we still have 10.

    More precisely, if the vendor has patched a flaw but we haven't, someone could use our weakness to break into our network and gather information. It is trivial to then jump to their cloud over the VPN or whatever connection we have. They can do so from our own servers or workstations.

    The same argument is even more obvious for downtime. Obviously, if we lose our network for 4 hours on Monday and theirs goes down for 4 hours on Tuesday, that is a total of 8 hours. If we both went down for overlapping periods on the same day, the total could be less than the sum of the two, but never less than the longer of the two outages.

    Dave
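    Dave's downtime arithmetic can be sketched by merging outage windows before summing, so overlapping outages aren't double-counted (the outage times below are hypothetical, chosen to match his two scenarios):

    ```python
    def total_downtime(outages):
        # Merge overlapping (start, end) outage windows, then sum their lengths.
        merged = []
        for start, end in sorted(outages):
            if merged and start <= merged[-1][1]:
                merged[-1][1] = max(merged[-1][1], end)  # extend the open window
            else:
                merged.append([start, end])              # start a new window
        return sum(end - start for start, end in merged)

    # Our network down Monday 08:00-12:00, vendor down Tuesday 08:00-12:00 (hours since Monday 00:00):
    print(total_downtime([(8, 12), (32, 36)]))  # 8 -- disjoint outages simply add
    # Both down the same morning, partially overlapping:
    print(total_downtime([(8, 12), (10, 13)]))  # 5 -- less than 4+4, but never less than the longest outage
    ```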

  • djackson wrote:

    ... if there are 10 vulnerabilities that the vendor has, and I have 15, then the total number is not 10!

    Of course! But what happens if your shop performs the services which would otherwise be farmed out to the vendor? Doesn't your shop take on those vulnerabilities as well? And can your shop perform those services with greater skill and alacrity than can the vendor?

  • I work for a SW company that is responsible for all sorts of clinical and financial data for nursing homes. You know, those places where you stuff grandma and grandpa to die.

    Among our customers, we had a contract with the state's nursing homes, which shared a T-1 network with the state's public schools' network as late as 2009. Their internet access was abysmal. They typically had problems printing the daily menu and the daily medicine list from our hosted RDP sessions. We finally convinced them to go self-hosted, and they set up small servers at each facility to resolve the issue. (Our apps would work on a Dell Tungsten-1000.)

    We have since been bought out and are converting our clients to the new company's web-app-only version. I haven't gotten deep into the daily ops of the new company. But I'm sitting here thinking of the decent-to-high-end nursing home in Arkansas that is set up with generators, large propane tanks, and their own on-property well, sitting on the outside edge of the 80-mile tornado path. What do you think their internet access looks like? Cell towers are going to be down. The collocated telephone exchanges (PBXs) are now platforms of cement on the ground. Oh yeah -- it's great they haven't lost data. But that data is totally useless if the prescription for two days of 7 mg of warfarin needs to be dropped to 3 mg today, pending the blood test. Doing 7 mg for 5 days may get grandma to stroke out.

    Then, to add to all the melodrama above, has anyone heard about Heartbleed? Do you want anyone to know that you are taking Viagra? How about digitalis, when applying for life insurance? How about how to access your 401k to take out a loan?

    I'm trying to be careful about what I put on the internet even while being an avid user.



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.

  • Steve Jones - SSC Editor (4/29/2014)


    Alex Gay (4/29/2014)


    There is no way that we in the NHS could use cloud computing in general, or Azure in particular, especially for databases that contain patient data. The risk of unauthorized access is too high to begin with, and when you add the fact that a US judge could order Microsoft to grant access to your data (see here http://www.theregister.co.uk/2014/04/28/us_judge_digital_search_warrants_apply_everywhere/), we wouldn't be able to guarantee the required level of data security. The ICO (Information Commissioner's Office) would gut us; it would probably cost more in fines for breach of patient data security than we would save in licensing and hardware, which is the point of the legislation.

    Don't bet on that. Amazon succeeded in getting many US governments into clouds, by building separate data centers and locating them in the US. It's possible we'll see an Azure government cloud located in the UK, for UK customers.

    This might work if it were an Amazon UK subsidiary, wholly owned and registered in the UK, away from US courts' attempts to overreach their bounds. There is already a site that lists the companies and services authorized for use by the UK Government (G-Cloud springs to mind).

    We also make use of services supplied from offsite data centres, but they tend to be hosted, and controlled, by the supplier of the software system. We don't have the administration rights that you get with a true cloud service, and we have to ask the supplier for an extract of our own data, usually at additional cost, if we want to move to a new solution. So this isn't really the same thing as a true cloud service, where you hire processing and storage capacity to use for whatever you want.

  • This discussion is very interesting, and many of the threads here are valid. Here is my stand:

    In the cloud, DBAs do not know how their data is taken care of, or who has access to it. If the data or a backup is not available for recovery, who is held responsible? At first sight people assume the DBA failed; the finger will point to the DBA first. There are also a lot of limitations in Azure. Personally, I prefer a non-cloud environment.

    ---------------------------------------------------
    "Thare are only 10 types of people in the world:
    Those who understand binary, and those who don't."
