Cloud Vulnerabilities: Unpredictable Weather Says IT Issues May Be Inherent
Recent East Coast storms have brought to light a number of cloud vulnerabilities that may be tied more to the technology's intrinsic structure than to anything IT pros can correct. For midsize IT admins, this raises questions of how much data they want to shift and what kind of mission-critical services (if any) they want to move cloudward, despite assurances from big providers like Amazon. Is it possible that natural phenomena will prove to be the cloud's Achilles' heel?
There are a number of issues surrounding the recent Amazon power loss in northern Virginia, which took down sites like Netflix, Instagram, and Pinterest. A recent article at the Daily Herald examines one such problem: power.
Despite its billing as a virtual system in which backups are automatically redundant and data is never lost because it's spread across multiple servers, the fact is that even cloud servers are rooted in the physical world. To operate, these systems need power. It's true that if a section of the power grid goes down, other servers should pick up the slack (Amazon's Elastic Load Balancing was supposed to do this last weekend, though in many cases it failed as well), but what happens if power goes out on a larger scale? Sure, this could happen to a local data center with backup generators too, but greater redundancy in the cloud doesn't make the system perfect. Virtual data on a system that doesn't run is no more useful to a company than data that's been deleted.
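The failover behavior described above can be sketched in a few lines. This is a hypothetical illustration, not Amazon's actual load-balancing logic: requests are tried against a list of regional endpoints, and if every region is down (the large-scale outage scenario), no amount of redundancy helps.

```python
# Sketch of region failover: try a primary endpoint, then fall back
# to replicas. Endpoint names here are hypothetical examples, not
# real provider hosts, and `fetch` stands in for any network call.

def fetch_with_failover(endpoints, fetch):
    """Try each endpoint in order; return the first successful result.

    `fetch` is any callable that raises on failure (e.g. a power loss
    taking a region offline). If every endpoint fails, redundancy has
    been exhausted and the outage surfaces to the caller.
    """
    errors = {}
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except Exception as exc:  # region unreachable
            errors[endpoint] = exc
    raise RuntimeError(f"all endpoints failed: {errors}")


# Simulate one healthy region among two storm-stricken ones.
def fake_fetch(endpoint):
    if endpoint != "us-west-replica":
        raise ConnectionError(f"{endpoint} is down")
    return "data"

result = fetch_with_failover(
    ["us-east-primary", "us-east-secondary", "us-west-replica"],
    fake_fetch,
)
print(result)  # -> data
```

The key point the sketch makes concrete: the fallback loop only works while at least one endpoint still has power.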
If nothing else, the recent storms are a wake-up call to both admins and companies like Amazon. For admins, it's a realization that relying on any technology for an "all eggs in one basket" scenario isn't a good idea, no matter its supposed redundancy. For providers like Amazon, physical failures showcase the need to increase reliability and may open the way for a new breed of power suppliers.
A recent Sys-Con Media blog post discusses the need for companies to evaluate power use as a criterion for public cloud use, rather than simply cost or agility. In some cases, the article argues, small and midsize businesses are feeling the cost-cutting pressure so acutely that they're willing to sacrifice uptime and redundancy in order to beef up their bottom line. Others are using the cloud for predictable workloads instead of on-the-fly app or service creation and might be better served by utilizing their own data centers.
The article examines the concept of the 500kW threshold, which several experts have suggested as the cutoff for determining whether public or private options are the better choice. If a company's power consumption is less than 500kW, it may make sense to outsource to a public provider, but beyond that it may be more economical to use a private option, both for uptime and cost reasons. It's also important to consider exactly what a provider is charging for the power to run cloud services; not all are running the newest, most power-efficient servers, and that 500kW mark could easily drop if power prices are too high or uptimes too low.
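The threshold reasoning above can be made concrete with a rough back-of-the-envelope sketch. All figures below are illustrative assumptions, not rates from any real provider; the only number taken from the article is the 500kW rule of thumb itself.

```python
# Hypothetical sketch of the 500kW public-vs-private rule of thumb.
# The threshold comes from the article; the price scaling is an
# assumed model of how expensive public power lowers that cutoff.

PUBLIC_THRESHOLD_KW = 500  # suggested cutoff from the article

def recommend_deployment(load_kw: float,
                         public_price_per_kwh: float,
                         private_price_per_kwh: float) -> str:
    """Return a rough public-vs-private recommendation.

    Below the threshold, outsourcing to a public provider tends to
    make sense; above it, a private data center may be more
    economical. Costly public power shifts the break-even point
    toward private hosting.
    """
    effective_threshold = PUBLIC_THRESHOLD_KW * (
        private_price_per_kwh / public_price_per_kwh
    )
    return "public" if load_kw < effective_threshold else "private"


# A 400kW shop facing public power priced 50% above private:
# the effective cutoff drops to ~333kW, tipping the answer.
print(recommend_deployment(400, 0.15, 0.10))  # -> private
```

The design point is the same one the article makes: the 500kW figure isn't fixed, and what a provider actually charges for power can move it substantially.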
Midsize IT admins need to be cautious with any move to the cloud, and not just because of security. Perhaps even more important in the discussion of cloud vulnerabilities is the concept of power: how much will it cost to keep a server running, and how much uptime does a provider guarantee? Ultimately, admins will have to evaluate the reliability of any physical system, no matter its virtual guarantees.
This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.