Comments posted to this topic are about the item The Titanic Cloud
Follow me on Twitter: http://www.twitter.com/way0utwest
Forum Etiquette: How to post data/code on a forum to get the best help
My Blog: www.voiceofthedba.com
As an old timer, I see this as something we all should be doing as standard operating procedure. When everything was part of a large in-house environment, smart companies had backup/recovery procedures in place. If you were really smart, you also had disaster recovery practices that included hot-swap sites. It all boiled down to how long you could stay in business if you lost part or all of your IT services. I see today's cloud environment as no different, or maybe even more important, since companies in a total cloud environment have even less control over their environment than before.
You need both, Steve. A good backup AND a good resume. 🙂
With electronic connections between businesses increasing, businesses will need a way to determine whether a potential connection is reasonably safe. Eventually, some sort of audit and rating system will be needed so that businesses can selectively make these determinations. Different aspects of a potential connection's security could be evaluated, including primary risks (physical location, hardware, security software,...), secondary risks (third-party connections, outsourced activities, end-user devices,...), operational policies (access procedures, upgrade frequency and process, scheduled and unscheduled downtime,...), etc.
I personally think that the IT "industry" should attempt to do some self-policing before governments mandate it. Having experience in various business and technical functions, I've seen the broad impact of the Sarbanes-Oxley Act and know that much of it has become window dressing with little real benefit (other than employment of auditors and consultants). A similar attempt in the IT world could be extremely costly and unproductive.
Barring a catastrophe of global proportions, I think it is theoretically possible to achieve a 100% (or at least 99.99%) Always On environment, given that the company/service has the necessary infrastructure redundancies, a strategic geographical distribution of hardware, and fail-over processes that are effective and automated.
Unfortunately, companies sometimes make the promise of "little to no downtime" without actually spending the money to realistically back it up. I guess the "little" part of the promise is the loophole, as "a little downtime" can be a very relative term...
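The "four nines" claim above can be sanity-checked with simple probability arithmetic. A minimal sketch, assuming independent failures (a big assumption in practice, since a shared-region outage like the Amazon one takes out "redundant" sites together); the 99% single-site figure is purely illustrative:

```python
# Illustrative availability math; assumes failures are independent.

def parallel(*avail):
    """Availability of redundant components (any one up suffices)."""
    p_all_down = 1.0
    for a in avail:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

def serial(*avail):
    """Availability of a chain where every component must be up."""
    total = 1.0
    for a in avail:
        total *= a
    return total

def downtime_hours_per_year(a):
    return (1.0 - a) * 365 * 24

single = 0.99                          # one site, ~99% available (hypothetical)
pair = parallel(single, single)        # two geographically separate sites
print(f"redundant pair: {pair:.4%}")   # 99.9900% -- four nines from two
print(f"downtime/yr: {downtime_hours_per_year(pair):.2f} h")
```

The flip side is the `serial` case: every extra component the whole service depends on (load balancer, DNS, a single cloud region) multiplies availability *down*, which is why the promise is expensive to actually deliver.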
I do not know if PSN is considered true "cloud".
It affected 25 million SOE users for over 14 days, and who knows how many users were unable to access Netflix streaming.
Reading this article made me think of what a Titanic incident that has been for Sony.
Although I am sure they will be OK... 😉
I would say the tactical question here is: cloud as primary with backups, vs. primary onsite with cloud as a hot secondary plus backups, vs. cloud as both primary and hot secondary (two different cloud vendors) with backups. The business perspective is uptime vs. budget vs. control. The mix will be different for different companies, managers, and even company and IT cultures.
I think one of the things to understand with services like what Amazon offers is to know all of your options within that service. If you have certain services that are primarily in the cloud, explore all of your failover options and be ready to act should something happen.
I have given a name to my pain...MCM SQL Server, MVP
Posting Performance Based Questions - Gail Shaw
Learn Extended Events