Cloud Computing's Portability Gotcha

Transfer fees can lead to lock-in as data stores grow.

John Foley, Editor, InformationWeek

November 26, 2009

3 Min Read

There were a couple of "aha" moments for me at the Interop conference's Enterprise Cloud Summit this month. The first was that some companies are already storing hundreds of terabytes of data in the cloud. The second was that it can be a slow and expensive process to move that data from one service provider to another.

The subjects came up in a panel on cloud interoperability, where the discussion ranged from APIs to cloud brokers to emerging standards. The panelists were Jason Hoffman, founder and CTO of Joyent; Chris Brown, VP of engineering with Opscode; consultant John Willis of Zabovo; and Bitcurrent analyst Alistair Croll. The gist was that we're still in the early days of cloud interoperability, and that while Amazon's API may be the center of the cloud universe right now, it's hardly enough.

The discussion turned to portability, the ability to move data and applications from one cloud environment to another. There are many reasons an IT organization might do that: dissatisfaction with a cloud service provider, new and better alternatives, or a change in strategy, to name a few. The issue hit home earlier this year when cloud startup Coghead shut down and SAP acquired only its assets and engineering team, forcing customers to find a new home for the applications that had been hosted there.

The bigger the data store, the harder the job of moving from one cloud to another. Some companies are putting hundreds of terabytes of data--even a petabyte--into the cloud, according to panel members, and some of these monster databases are reportedly in Amazon's Simple Storage Service. That's entirely plausible: Amazon's S3 price list includes a discounted tier for data stores over 500 TB.

"Customers with hundreds of terabytes in the cloud: You are no longer portable, and you're not going to be portable, so get over it," Joyent CTO Hoffman said.

It can take weeks or months to move a petabyte of data from one cloud to another, depending on data transfer speeds, Hoffman said. And Amazon charges 10 cents per gigabyte to transfer data out of S3, which comes to $100,000 per petabyte. (That's after you've already spent $100,000 or more in transfer fees moving the data into S3.)
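
As a sanity check on those figures, here's a minimal back-of-envelope sketch in Python. It assumes decimal units (1 PB = 1,000,000 GB), the flat 10-cents-per-gigabyte egress fee cited above, and a fully utilized link; the function names are illustrative, not part of any AWS tooling.

```python
# Back-of-envelope math for the egress figures above. Assumptions: decimal
# units (1 PB = 1,000,000 GB), a flat $0.10/GB transfer-out fee, and a link
# running at full utilization -- real transfers will be slower.

EGRESS_FEE_PER_GB = 0.10   # S3 transfer-out price cited above
PETABYTE_GB = 1_000_000    # decimal petabyte, in gigabytes

def egress_cost(gb: float, fee_per_gb: float = EGRESS_FEE_PER_GB) -> float:
    """Dollar cost to move `gb` gigabytes out of the cloud."""
    return gb * fee_per_gb

def transfer_days(gb: float, mbps: float) -> float:
    """Days to move `gb` gigabytes over a link of `mbps` megabits per second."""
    seconds = (gb * 8_000) / mbps   # 1 GB = 8,000 megabits (decimal)
    return seconds / 86_400

print(egress_cost(PETABYTE_GB))           # 100000.0 -> $100,000 per petabyte
print(transfer_days(PETABYTE_GB, 1_000))  # ~92.6 days, even on a 1-Gbps link
```

Even at line rate, a petabyte ties up a gigabit link for roughly three months, which squares with Hoffman's "weeks or months."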

Amazon estimates it would take one to two days to import or export 5 TB of data over a 100-Mbps connection. It has a work-around in beta testing, AWS Import/Export, that lets customers load or remove data using portable storage devices, bypassing the network entirely. Amazon recommends that approach whenever loading data over the network would take a week or more.
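
That recommendation reduces to a simple threshold test. Here's a minimal sketch under the same assumptions as above; the helper names are hypothetical, and since a fully utilized link is optimistic, real workloads would hit the ship-disks threshold even sooner.

```python
# Amazon's rule of thumb above as a threshold test: consider shipping disks
# when a network load would take a week or more. Helper names are
# hypothetical; assumes decimal units and a fully utilized link.

def transfer_days(gb: float, mbps: float) -> float:
    """Days to move `gb` gigabytes over a link of `mbps` megabits per second."""
    return (gb * 8_000) / mbps / 86_400

def prefer_physical_media(gb: float, mbps: float, threshold_days: float = 7.0) -> bool:
    """True when the estimated network transfer hits the week-long threshold."""
    return transfer_days(gb, mbps) >= threshold_days

print(prefer_physical_media(gb=2_000, mbps=1_000))  # 2 TB on 1 Gbps: ~0.2 days -> False
print(prefer_physical_media(gb=50_000, mbps=100))   # 50 TB on 100 Mbps: ~46 days -> True
```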

What's the lesson? Getting started in the cloud may be fast, cheap, and easy, but the longer you're there, the harder it is to leave. As data accumulates, IT needs to monitor not just what it's spending on cloud storage, but also what it would cost to get that data back out. Price out an exit plan.

Read more analysis of cloud issues at PlugIntoTheCloud.com
