In The Cloud, Goliath Generally Thumps David. Sorry, Underdogs

Shopping for enterprise-class cloud services? Don't be enticed by plucky upstarts with low prices and cool new features. Size matters.

Kurt Marko, Contributing Editor

July 10, 2013


Not a week goes by that I don't get pitched by a hot new cloud service hoping to differentiate itself. Maybe it's unique features, better customer support, lower costs, a more flexible pricing model or some combination of the above that makes it superior to the hundreds that have come before. Excuse me if I'm not impressed.

I'm not saying cloud services, whether raw infrastructure-as-a-service compute cycles or full-blown software-as-a-service applications, don't still have a bit of the technology Wild West going on. There are few set product definitions and only loosely defined categories, so every entrepreneur who thinks he has a better mousetrap is wrapping that world-altering idea in a cloud-related business model, no matter how tenuous the connection. Part B of the pitch: Go to great lengths to point out how this new idea is absolutely different from and better than hundreds of predecessors.

The problem is, as the titans of the cloud industry expand their portfolios, lower their prices and offer tighter integration for app developers looking to build back-end infrastructure, it's getting hard to see any business case for all these newbies. The niche players are left carving up a smaller and smaller pie.

The question for IT managers trying to morph their internal systems and applications into Internet-style adaptive IT services is where to buy the tools: from a cloud-era industrialist like Amazon, Google or Microsoft that consistently delivers well-understood features and service levels while grinding out lower prices and innovative new products, or from an unknown startup claiming to understand, and cater to, the particulars of your business vertical or segment. Do I choose Goliath, which I know and understand, if not love, or David, who promises something new, served up with extra-special treatment, customizations galore and the same or better economics as his larger rivals?

Sadly, when it comes to the cloud, and frankly, business in general, David only wins in fables.

A good example of the developing dichotomy between established cloud services and emerging wannabes is a press release that surely passed through every tech journalist's spam folder recently. The company shall remain nameless -- its hubris is hardly unique, so it's unfair to make it a scapegoat for an entire group -- but its pitch claims it's "taking over both the cloud hosting industry and the startup space over the last few months" and goes on to tout "how the team is going toe-to-toe with both Amazon and Rackspace, and winning the coveted battle for the cloud."

Besides the questionable grammar, these statements are not only delusional, they're downright comical for their utter disregard of the facts.

Reality check No. 1: A recent report from Morgan Stanley pegs Amazon Web Services as a $2 billion business, on its way to $24 billion within 10 years. That's a compound annual growth rate in excess of 28%, for the math-challenged. And Morgan backs up the claim, pointing out, "Today, AWS offers amongst the most complete set of cloud services and generally the lowest prices, though competitors are racing to catch up." But the competitors it mentions aren't the small fry claiming to take over the cloud. They're the usual suspects: Google, Microsoft and Rackspace.
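For those inclined to check the math, the growth rate follows directly from Morgan Stanley's two figures. Here's a quick back-of-the-envelope sketch in Python; the $2 billion, $24 billion and 10-year numbers are simply the estimates cited above:

    # Compound annual growth rate implied by Morgan Stanley's AWS estimates
    start_revenue = 2e9    # roughly $2 billion today
    end_revenue = 24e9     # projected $24 billion
    years = 10

    cagr = (end_revenue / start_revenue) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")   # -> Implied CAGR: 28.2%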

Reality check No. 2: Cloud services are rapidly being commodified. A look at the aforementioned cloud disruptor's product offerings shows nothing unique or particularly interesting save for pairing SSDs with every compute instance and some relatively aggressive pricing. The company's claim of subminute provisioning times is positively pedestrian given Google's recent demo (at Google I/O) of apps spinning up multiple GCE instances within 10 seconds.

Basic IaaS features for compute and storage instances are relatively comparable, and pricing is both extremely competitive and transparent. Many services have online price calculators, but a handy resource for comparing prices across clouds and estimating three-year total costs for arbitrarily complex deployments is RightScale's PlanForCloud site. Using its numbers, which a few spot checks against the vendors' own figures confirm are accurate, here are the monthly costs for roughly comparable x86 Linux systems used 12 hours per day:

-- AWS m1.large (4 x 1.0 GHz, 7.5 GB RAM, 850 GB storage): $96.72

-- GCE n1-standard-2-d (6 x 1.1 GHz, 7.5 GB RAM, 870 GB storage): $98.58

-- HP Cloud large (8 CPUs, 8 GB RAM, 240 GB storage): $104.16

-- Microsoft Azure VM Role Large (4 x 1.6 GHz, 7 GB RAM, 285 GB storage): $89.28

Amazon and Google are within 2% of each other, and there's only about a 16% spread among the four. It's not surprising, then, that the nameless up-and-coming cloud service does undercut the big boys: its large system with 8 GB RAM and an 80 GB SSD (versus the competitors' larger but slower hard drives) goes for $80 per month. But as we reported last year, AWS cut prices for on-demand EC2 instances by 5% to 10%, then earlier this year slashed them by up to 26% on the same day Google trimmed its compute instance pricing by 4%. Amazon clearly means it when it says lowering prices is in its DNA and that it simply won't be undercut by anyone it considers a significant competitor -- that is, one with comparable scale and depth of cloud services, not just a seller of cheap virtual machines.
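For anyone who wants to reproduce the comparison, the monthly figures above work out to an hourly rate multiplied by 12 hours a day over a 31-day month. The sketch below backs the per-hour rates out of the listed totals, so treat them as illustrative approximations rather than quoted list prices:

    # Monthly cost at 12 hours/day over a 31-day month.
    # Hourly rates are inferred from the monthly totals above, not from price sheets.
    hours_per_month = 12 * 31   # 372 billable hours

    hourly_rates = {
        "AWS m1.large": 0.260,
        "GCE n1-standard-2-d": 0.265,
        "HP Cloud large": 0.280,
        "Azure VM Role Large": 0.240,
    }

    for instance, rate in hourly_rates.items():
        print(f"{instance}: ${rate * hours_per_month:.2f}/month")
    # -> AWS $96.72, GCE $98.58, HP $104.16, Azure $89.28 -- roughly a 16% spread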

Reality check No. 3: The depth and breadth of services, not just the cost per VM unit, is really, really important for enterprises remaking IT in an "as-a-service" model. When the goal is actually running complex, multitier enterprise or customer-facing applications, there's far more to evaluate than price. AWS is clearly the leader in cloud service innovation, having built a product portfolio that's truly stunning in its range and variety. Want to automatically scale compute capacity in response to changing workloads? Its Auto Scaling service is the ticket. What about balancing load across multiple cloud instances, efficiently distributing rich content or relocating a massive data warehouse to the cloud? Amazon Elastic Load Balancing, CloudFront and Redshift have you covered. Have a job that can't be fully rendered into an algorithm, or one better accomplished with a human touch? Mechanical Turk provides an efficient way to enlist people with the requisite skills who are agreeable to your terms, along with an interface that automates their responses into a crowdsourced resource pool for tasks -- photo/video processing, data validation and cleanup, audio editing and transcription -- that are somewhere between prohibitively expensive and impossible to do via raw computation.
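To give a feel for how little glue code that breadth of platform demands, here is a minimal Auto Scaling sketch. It uses today's boto3 Python SDK rather than the tooling available in 2013, and every name in it (the group, the launch configuration, the AMI ID, the availability zones) is a placeholder for illustration, not anything drawn from this article:

    # Minimal AWS Auto Scaling sketch using boto3.
    # Assumes AWS credentials are configured; all names and IDs are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # A launch configuration tells Auto Scaling what kind of instance to start.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-launch-config",
        ImageId="ami-12345678",      # placeholder AMI
        InstanceType="m1.large",
    )

    # The group keeps between 2 and 10 instances running across two zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-launch-config",
        MinSize=2,
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # A simple policy that adds one instance each time it is triggered,
    # e.g., by a CloudWatch CPU alarm wired to this policy.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-out-on-load",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
    )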

Even though the cloud services industry is still young, the low-hanging fruit has already been plucked. New entrants trying to compete on price or tailored services for specific markets are reaching for higher and higher, and thus sparser and sparser, limbs. CIOs remaking IT services and enterprise applications around the cloud, whether going all in with a public IaaS/SaaS strategy or building a hybrid infrastructure mixing public and private cloud services, are being courted by numerous suitors offering what are now commodity services at cut-rate prices.

But enterprise IT strategies are seldom built on cost alone. The differentiator is delivering innovative, customized services and applications that provide quantifiable benefit to the bottom line and that are backed by partners that you're pretty sure won't go belly up in a year. From this perspective, the cloud Goliaths with their rich and ever-expanding platforms trump the rock-slinging Davids hawking raw compute capacity on the cheap. Yeah, it's fun to root for underdogs. But are you going to bet your business on one?

About the Author

Kurt Marko

Contributing Editor

Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a varied career that has spanned virtually the entire high-tech food chain from chips to systems. Upon graduating from Stanford University with a BS and MS in Electrical Engineering, Kurt spent several years as a semiconductor device physicist, doing process design, modeling and testing. He then joined AT&T Bell Laboratories as a memory chip designer and CAD and simulation developer. Moving to Hewlett-Packard, Kurt started in the laser printer R&D lab doing electrophotography development, for which he earned a patent, but his love of computers eventually led him to join HP's nascent technical IT group. He spent 15 years as an IT engineer and was a lead architect for several enterprisewide infrastructure projects at HP, including the Windows domain infrastructure, remote access service, Exchange e-mail infrastructure and managed Web services.
