Simply buying more on-premises data storage without evaluating cloud-based storage alternatives could be a costly mistake.

Michael Fork, product manager at SoftLayer, an IBM company

December 4, 2014


With nearly every enterprise's IT department under pressure to cut costs and improve responsiveness, one of the easiest and most basic moves is to use the current IT footprint more efficiently. But that is easier said than done. Or is it? Companies around the world are quickly recognizing the cost benefits of storing some types of corporate data in the cloud instead of taking up valuable floor space in the corporate data center.

Every enterprise has different needs and data storage requirements. Historically, the response to the need for expansion was buying more and more on-premises storage. But with the explosion in data, it is inevitable that almost every data center operator will face a pain point that makes the public cloud an attractive place to keep significant amounts of data.

As IT leaders face these pain points, they have to ask a critical question: How much raised-floor, air-conditioned data-center space can the organization reasonably support to meet its immediate and future storage needs? Data center space is a scarce (and expensive) resource. If your data center is full, moving infrequently used data off the floor frees up valuable space. And even if a data center isn't fully occupied, buying additional storage capacity might not be a good use of capital budgets, or it might require additional power and cooling. In either case, moving some data to a cloud-storage provider is often the better alternative.

[Want more tips on cloud storage? See Cloud Storage: 8 Ways You're Wasting Money.]

Rarely used "cold" data, such as archived copies of transaction records, is a clear candidate to be moved to a public cloud provider. Other good candidates are old log files that record activity on computers and networks for troubleshooting purposes, and rarely requested records and emails that are required to be archived for years. Another good option is file shares; often, only a small portion of the data is regularly accessed.
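As a concrete illustration, here is a minimal sketch of how cold files might be swept into an OpenStack Swift-compatible object store using the python-swiftclient library. The endpoint, credentials, container name, and 90-day threshold are all illustrative assumptions, not a vendor-specific recipe:

import os
import time

from swiftclient.client import Connection

# All values below are illustrative assumptions, not real credentials.
AUTH_URL = "https://identity.example.com/auth/v1.0"
COLD_AGE_SECONDS = 90 * 24 * 3600  # untouched for 90+ days counts as "cold"

def archive_cold_files(local_dir, container="cold-archive"):
    """Upload rarely accessed files from local_dir to object storage."""
    conn = Connection(authurl=AUTH_URL, user="account:user",
                      key="api-key", auth_version="1")
    conn.put_container(container)  # no-op if the container already exists
    cutoff = time.time() - COLD_AGE_SECONDS
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        # Last-access time is a simple proxy for identifying cold data.
        if os.path.isfile(path) and os.path.getatime(path) < cutoff:
            with open(path, "rb") as f:
                conn.put_object(container, name, contents=f)
            print("archived:", name)

archive_cold_files("/mnt/fileshare")

In practice the local copy would be deleted or stubbed only after the upload is verified, but the pattern -- identify cold data by access time, push it to a cheaper tier -- is the core of the exercise.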

The cloud is also a good way to follow the best practice of backing up data to a separate site. Cloud storage meets that requirement and is generally much more cost effective than building a second data center.

In other cases, new applications perform better when their storage lives outside the corporate data center. Social and mobile applications depend on responsiveness, and missing an opportunity to engage a customer is missing an opportunity to grow the business. Cloud-based storage helps in two ways: Storing data nearer the customer improves responsiveness, and the cloud provides immediate scalability.

For example, imagine a mobile application that senses a customer's cellphone is within 100 yards of a shop and pushes a coupon for a discounted mocha cappuccino. Every fraction of a second of latency in its delivery could mean the difference between a sale and a happy customer, and a customer who is disappointed because the notification arrived just after she made her purchase decision.

Now imagine the same scenario, but the World Cup is in town and you are doing this tens or hundreds of thousands of times. Not only does storing data on servers closer to the coffee shop (servers not constrained by the corporate data center) save precious time, but most of the data generated "on the edge" doesn't need to be stored on the data center floor at all. Back-office systems do not need to know about every customer check-in -- just about significant events, such as when a notification is generated.

Big data and analytics, two workloads that need large amounts of compute only intermittently, are another place where cloud storage makes sense. By storing the data that feeds these processes in the cloud, companies can take advantage of the cloud's elasticity and rapidly provision exactly the compute capacity required for a seasonal business or a new analytics experiment. For a retailer, using cloud-based storage during the Christmas season would allow deep analysis of daily sales data without buying disk drives and processing power that sit idle the rest of the year.
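To make that concrete, here is a minimal sketch of the read side of such a seasonal job, again using python-swiftclient against a Swift-compatible store. The container layout and CSV format -- one "sales/YYYY-MM-DD/store-NNNN.csv" object per store per day, with sku,units,revenue rows -- are hypothetical:

from collections import defaultdict

from swiftclient.client import Connection

# Hypothetical credentials and object layout; see the description above.
conn = Connection(authurl="https://identity.example.com/auth/v1.0",
                  user="account:user", key="api-key", auth_version="1")

daily_revenue = defaultdict(float)
# List every sales object for December in one pass.
_, objects = conn.get_container("sales", prefix="sales/2014-12",
                                full_listing=True)
for obj in objects:
    _, body = conn.get_object("sales", obj["name"])
    day = obj["name"].split("/")[1]
    for row in body.decode("utf-8").splitlines()[1:]:  # skip header row
        daily_revenue[day] += float(row.split(",")[2])

for day in sorted(daily_revenue):
    print(day, round(daily_revenue[day], 2))

The compute running this analysis can itself be rented by the hour and shut down when the season ends; the data simply stays put in the object store.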

In most industries, though, companies find some data difficult to move. Any insurer or hospital that stores personal health records has spent years developing policies and procedures to ensure their privacy. Some government regulations require a CIO to know the physical location of certain data, something that can be difficult in the cloud. Sensitive financial information must be tracked for years, and many companies believe it's safer to rely on their own techniques than to try to write service-level agreements that provide the durability and security assurances they need.

But even the most cautious CIOs need to fully understand the benefits they might achieve by exploiting cloud storage.

The first step should be an audit that yields a realistic assessment of the cost of storing a terabyte of data in the enterprise data center at each storage tier. That should include defining an internal SLA covering both the durability of the data (the chance of an irrecoverable loss) and its availability (the percentage of time it is accessible). With these figures in hand, the CIO can directly compare on-premises operation against the costs and SLAs of public cloud providers.
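The arithmetic itself is simple. The sketch below compares a fully loaded on-premises cost per terabyte per month against a published cloud price; every figure is a placeholder assumption to be replaced with numbers from your own audit:

# Every figure here is an assumed placeholder for illustration only.
CAPEX_PER_TB = 500.00          # disk + array cost, amortized over 36 months
POWER_COOLING_PER_TB = 8.00    # monthly power and cooling, per TB
ADMIN_PER_TB = 12.00           # monthly staff and maintenance, per TB
CLOUD_PER_GB_MONTH = 0.03      # a provider's published object-storage price

on_prem_monthly = CAPEX_PER_TB / 36 + POWER_COOLING_PER_TB + ADMIN_PER_TB
cloud_monthly = CLOUD_PER_GB_MONTH * 1024  # 1 TB = 1024 GB

print("on-premises: $%.2f per TB per month" % on_prem_monthly)
print("cloud:       $%.2f per TB per month" % cloud_monthly)

# The cheaper number wins only if its SLA -- durability and availability --
# also meets the internal target defined during the audit.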

With a clear understanding of the costs, an enterprise is in better shape to start taking advantage of cloud storage. Simply building more private data-center capacity without evaluating cloud-based storage alternatives could be a costly mistake.


About the Author(s)

Michael Fork

product manager at SoftLayer, an IBM company

Michael is the product manager at SoftLayer responsible for storage, including block, file, object, and archive offerings. Previously he was a lead architect in the CTO's office for Global Technology Services' Cloud Services. As an early proponent of OpenStack, he has been involved in IBM's OpenStack strategy from the outset and serves as co-lead of the cross-IBM team managing technical contributions to the community. Michael was appointed both IBM Senior Technical Staff Member and Master Inventor in September 2014.

 
