
Cloud Computing Trends

Greg Ness




Outages and the Weakonomics of Public Cloud

Are Amazon’s customers technology leaders, or mere laggards simply trying to cut IT costs to the bare minimum?

The latest Amazon outage again raises the question of whether Amazon’s customers are technology leaders or laggards simply trying to cut IT costs to the bare minimum.

Certainly some enterprises are using the public cloud appropriately, for projects where a non-critical application or service needs to be stood up quickly and on a small budget.



Yet some companies seem to be using the public cloud even for predictable workloads, minimizing costs to the point of accepting higher risk. Consider this comment from Michael Lee at ZDNet:

“What this means, though, is that several companies have looked at their bottom line, and decided that the cost to mitigate the risk isn’t worth maintaining 100 per cent uptime. Bettin said that these organisations tend to be small, and, in order to maintain any sort of profit, they have to be cutthroat with their costs. This is something that the cloud has enabled, but it also puts them at significant risk.”
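To make that tradeoff concrete, here is a rough back-of-the-envelope sketch of the calculation such a company is implicitly making. All of the inputs (standby pricing, outage frequency and duration, revenue impact) are hypothetical numbers chosen for illustration, not figures from Amazon or ZDNet:

```python
# Back-of-the-envelope comparison: cost of running a redundant standby
# versus the expected cost of downtime. All figures are hypothetical.

HOURS_PER_YEAR = 8760

# Hypothetical redundancy footprint
standby_hourly_cost = 0.50        # $/hour for a warm standby in another zone

# Hypothetical outage profile and business impact
outages_per_year = 2              # unplanned events per year
hours_per_outage = 12             # duration of each event
revenue_loss_per_hour = 150.0     # $/hour of downtime for a small shop

cost_of_redundancy = standby_hourly_cost * HOURS_PER_YEAR
expected_downtime_loss = outages_per_year * hours_per_outage * revenue_loss_per_hour

print(f"Annual cost of a warm standby: ${cost_of_redundancy:,.0f}")
print(f"Expected annual downtime loss: ${expected_downtime_loss:,.0f}")
print("Redundancy pays for itself" if expected_downtime_loss > cost_of_redundancy
      else "Accepting the outage risk looks cheaper on paper")
```

Under those assumed numbers the spreadsheet favors accepting the risk, which is exactly the calculation Bettin describes; the catch is that outage frequency and duration are the least certain inputs in it.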

If public cloud (IaaS) is so inexpensive relative to traditional IT or private cloud, then why would any enterprise take shortcuts with the (inexpensive) redundancy and backup the public cloud enables? Note also that Amazon may have cut a few corners with its own data center infrastructure: at least one Amazon data center went down because its backup generators failed to start.

From The Hidden Bugs That Made AWS Outage Worse:

“However, it was a variety of unforeseen bugs appearing in Amazon’s software that caused the outage to last so long: for example, one datacentre failed to switch over to its backup generators and eventually the stores of energy in its uninterruptible power supply (UPS) were depleted, shutting down hardware in the region.”

For all of the hoopla about the low cost of public cloud (see the Archimedius perspective at amazon-and-the-enterprise-it-monoculture-myth), the incidence and duration of unplanned downtime suggest that the economics of public cloud may at times be less attractive than the marketing portrays, or at least that infrastructure marketed as robust may not be as resilient as advertised unless the customer pays extra.

This again goes to the 500 kW threshold discussed previously. In other words, the business case for public cloud for predictable workloads may be so weak that, in some cases, it requires shortcuts to the detriment of availability.

Dr. Zen Kishimoto discusses the 500 kW threshold theory in his review of the recent FiRe panel on cloud computing:

“At this point, Greg interjected an interesting fact drawn from his discussions with area experts. When your power consumption is less than 500 kW, it makes sense to outsource your computing to a public cloud. But if your consumption goes over 500 kW, it becomes more economical to have your own infrastructures (i.e., private clouds) so that you can tune them for your computing needs.”
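A minimal sketch of that threshold logic follows. The pricing, facility cost, and power figures are illustrative assumptions (not numbers from the panel), but they show the shape of the comparison: a flat public cloud rate against a large fixed facility cost that only pays for itself once the IT load is big enough.

```python
# Rough public-vs-private crossover model. All rates are illustrative
# assumptions, not figures from the panel discussion.

HOURS_PER_YEAR = 8760

def public_cloud_annual_cost(it_load_kw, rate_per_kw_hour=0.90):
    """Public IaaS cost, modeled as a blended $/kW-hour-equivalent rate."""
    return it_load_kw * rate_per_kw_hour * HOURS_PER_YEAR

def private_cloud_annual_cost(it_load_kw,
                              fixed_facility_cost=3_500_000,  # amortized build + staff
                              pue=1.4,                        # power usage effectiveness
                              power_price_per_kwh=0.08):
    """Private cloud cost: large fixed facility cost plus metered energy."""
    energy_cost = it_load_kw * pue * power_price_per_kwh * HOURS_PER_YEAR
    return fixed_facility_cost + energy_cost

for load_kw in (100, 300, 500, 700, 1000):
    pub = public_cloud_annual_cost(load_kw)
    priv = private_cloud_annual_cost(load_kw)
    cheaper = "public" if pub < priv else "private"
    print(f"{load_kw:>5} kW  public ${pub:>11,.0f}  private ${priv:>11,.0f}  -> {cheaper}")
```

With these assumed inputs the crossover lands just above 500 kW; change the inputs and the threshold moves, which is exactly where the power price discussion below comes in.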

Zen also talked about the emergence of power costs as a strategic concern for IT, partially driven by the influence of public cloud operating models:

“To do this massive task, for the past 25 to 30 years, 80–85% of the cost has been dedicated to software and those who manage software. Cloud changes this. For example, companies like eBay and PayPal have only a few patterns, which cuts the cost of maintaining them. When you do this, your opex primarily becomes energy, and the issue becomes where you run your loads and how you optimize the power/infrastructure ratio.”
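The arithmetic behind that shift is straightforward. The sketch below uses assumed values for IT load, PUE (power usage effectiveness), and the power price, simply to show how quickly energy becomes the dominant line item:

```python
# Annual energy bill for a given IT load. PUE converts critical IT load
# into total facility draw; all input values are assumptions.

HOURS_PER_YEAR = 8760

it_load_kw = 500            # critical IT load
pue = 1.5                   # facility overhead: cooling, UPS losses, lighting
power_price_per_kwh = 0.10  # $/kWh

facility_kw = it_load_kw * pue
annual_kwh = facility_kw * HOURS_PER_YEAR
annual_energy_cost = annual_kwh * power_price_per_kwh

print(f"Facility draw:      {facility_kw:,.0f} kW")
print(f"Annual consumption: {annual_kwh:,.0f} kWh")
print(f"Annual energy bill: ${annual_energy_cost:,.0f}")
```

At roughly $650,000 a year under those assumptions, where the load runs and how efficiently the facility turns grid power into compute stops being a rounding error.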

At 500 kW, power becomes a material (if not strategic) concern, and with it the data center’s electrical and mechanical design. An analyst recently told me that at least one cloud provider was still leveraging data centers built before 2007. The question then becomes: how much are customers paying for power, the largest variable cost in larger environments? If that rate is too high, the threshold for moving to a private cloud in an advanced data center could be as low as 300 kW.
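A quick sensitivity check, using the same kind of rough model as above with assumed inputs, shows how the crossover slides as the effective rate a public cloud customer pays for power (passed through in the service price) goes up:

```python
# How the public/private crossover shifts with the effective power rate
# embedded in the public cloud price. All figures are illustrative.

HOURS_PER_YEAR = 8760
FIXED_FACILITY_COST = 3_500_000   # amortized private build + staff, per year
PUE = 1.4                         # private facility efficiency
PRIVATE_POWER_PRICE = 0.08        # $/kWh paid directly by a private cloud

def breakeven_kw(public_rate_per_kw_hour):
    """IT load at which a private cloud becomes cheaper than public IaaS."""
    public_per_kw_year = public_rate_per_kw_hour * HOURS_PER_YEAR
    private_per_kw_year = PUE * PRIVATE_POWER_PRICE * HOURS_PER_YEAR
    return FIXED_FACILITY_COST / (public_per_kw_year - private_per_kw_year)

for rate in (0.90, 1.00, 1.20, 1.45):
    print(f"effective rate ${rate:.2f}/kW-hour -> crossover ~{breakeven_kw(rate):,.0f} kW")
```

Under these assumptions, an effective rate roughly 60 percent higher is enough to pull the threshold from about 500 kW down to about 300 kW.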

If you don’t have 30 minutes to watch the FiRe cloud panel video, I suggest Zen’s blog for a succinct review of the key points.


More Stories By Greg Ness

Gregory Ness is the VP of Marketing of Vidder and has over 30 years of experience in marketing technology, B2B and consumer products and services. Prior to Vidder, he was VP of Marketing at cloud migration pioneer CloudVelox. Before CloudVelox he held marketing leadership positions at Vantage Data Centers, Infoblox (BLOX), BlueLane Technologies (VMW), Redline Networks (JNPR), IntruVert (INTC) and ShoreTel (SHOR). He has a BA from Reed College and an MA from The University of Texas at Austin. He has spoken on virtualization, networking, security and cloud computing topics at numerous conferences including CiscoLive, Interop and Future in Review.
