Commentary | Partner Perspectives
Steve Bohac
4/6/2016 09:38 AM

The Rise Of Object Storage

Scale storage with more flexibility, without sacrificing performance or adding complexity.

In a world where cloud computing is becoming the standard, one could think of “web-scale” storage architecture as being something of a “participatory democracy.”

More and more companies are becoming public cloud service providers. Don’t think of Microsoft Azure or Amazon Web Services; instead, think of somewhat smaller businesses (relatively speaking): gaming websites, online retailers, and other traditional businesses using software-as-a-service (SaaS) to deliver new value to customers.

Even these smaller public cloud providers look back wistfully at a time when “gigabytes” were considered big. For them, even terabyte-scale is fast becoming history. They are looking at managing petabytes of data. And soon more petabytes. And later even more petabytes. You see where this is going.

Up Or Out?

With traditional storage installations, when people need to store more data, typically the only option is to add more disks to the existing controllers. More disks equals more storage, right?

But this comes at a cost: think about the performance hit incurred as all those disks are added. The existing controllers don’t get any more powerful, yet they now have more disks to manage, and the file systems become more complex and burdened as ever more files come under management. Most customers are not happy with that tradeoff; they want greater capacity without compromising performance or adding complexity. In fact, they often need to increase performance to accommodate different workloads.

Traditional storage appliances and architectures were simply not designed to achieve petabyte scale. They’re just too rigid. Most of them were also designed to lock you into the manufacturer’s architecture, so all future investments must be made with that vendor.

This brings us back to our “participatory democracy” in which everybody pitches in for the common good.

When we move to object storage, we free ourselves of the rigid file structure and hierarchy of traditional storage appliances. Object storage is optimized for petabyte scale, exactly where the shortcomings of traditional file systems are exposed. Object storage architectures are “flatter”: every object is addressed directly by name, with no directory hierarchy to weigh things down as capacities and object counts become immense.
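To make the contrast concrete, here is a toy sketch in plain Python (illustrative only, not any product’s code) of the difference between walking a directory hierarchy and looking up an object in a flat namespace:

```python
# Toy illustration: a flat object namespace is essentially one big
# key/value mapping, while a file hierarchy must be walked level by level.

# Hierarchical: each component of the path is a lookup in a nested structure.
fs = {"projects": {"2016": {"reports": {"q1.pdf": b"..."}}}}
node = fs
for part in "projects/2016/reports/q1.pdf".split("/"):
    node = node[part]  # one traversal step per directory level

# Flat: the full name is the key; one lookup, regardless of apparent "depth".
# Slashes in an object key are just characters in a name, not directories.
objects = {"projects/2016/reports/q1.pdf": b"..."}
data = objects["projects/2016/reports/q1.pdf"]
```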

Additionally, object storage architectures offer much more flexibility. By employing a distributed data model, many resources each handle a portion of the total load. In this way, we scale out across the entire storage installation, increasing capacity without the performance penalty of piling ever more disks behind a fixed set of controllers. Should we want to improve performance, we simply add incremental compute resources to the cluster.
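The sketch below shows the idea behind that distributed model, using simple consistent hashing. It is a toy, far simpler than a real placement algorithm such as Ceph’s CRUSH, and the node names in it are hypothetical, but it illustrates why each node serves only a share of the load and why adding a node moves only a fraction of the data:

```python
import hashlib
from bisect import bisect

def ring(nodes, vnodes=100):
    """Build a consistent-hash ring: (hash, node) pairs sorted by hash."""
    points = []
    for node in nodes:
        for i in range(vnodes):
            h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
            points.append((h, node))
    return sorted(points)

def locate(points, key):
    """Find the node responsible for an object key."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    i = bisect(points, (h, "")) % len(points)
    return points[i][1]

cluster = ring(["node-a", "node-b", "node-c"])
print(locate(cluster, "media/2016/04/clip-0001.mp4"))

# Scaling out: a fourth node takes over only a fraction of the keys,
# so capacity and throughput grow without a central controller bottleneck.
bigger = ring(["node-a", "node-b", "node-c", "node-d"])
print(locate(bigger, "media/2016/04/clip-0001.mp4"))
```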

Improve Everything

As mentioned above, object storage offers great flexibility. When it is software-defined, we have wide-open choice in the hardware we use. Software-defined object storage can be tiered among inexpensive, moderate, and high-end storage hardware to correspond to archival (“cold”), seldom to moderately used (“warm”), and high-frequency (“hot”) data without requiring investments in different architectures.
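As a sketch of how such a tiering decision might look in software (the function, thresholds, and tier names here are purely illustrative assumptions, not any product’s policy engine):

```python
from datetime import datetime, timedelta

def choose_tier(last_accessed, now=None):
    """Pick a storage tier from access recency (thresholds are illustrative)."""
    now = now or datetime.utcnow()
    age = now - last_accessed
    if age < timedelta(days=7):
        return "hot"    # e.g., SSD-backed pool for high-frequency data
    if age < timedelta(days=90):
        return "warm"   # e.g., mid-range disk for moderately used data
    return "cold"       # e.g., dense archival disk for rarely read data

print(choose_tier(datetime.utcnow() - timedelta(days=30)))  # -> warm
```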

You will also have many ways to access your data in object architectures: from the S3 protocol to the OpenStack Swift API to traditional file interfaces like NFS and SMB/CIFS, you have variety in how you allow your clients (or customers, in a cloud infrastructure) to reach the data stored in your object infrastructure.
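For example, here is a minimal sketch of reaching an S3-compatible object store from Python with the widely used boto3 library; the endpoint URL, bucket, and credentials are placeholders, and the same data could equally be reached through a Swift client or an NFS/SMB gateway:

```python
import boto3

# Sketch only: endpoint, bucket, and credentials below are hypothetical.
# Any S3-compatible object store (e.g., a Ceph RADOS Gateway) would work.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store and fetch an object by bucket + key -- no mounts, no file paths.
s3.put_object(Bucket="customer-data", Key="invoices/2016-04.csv", Body=b"id,total\n")
reply = s3.get_object(Bucket="customer-data", Key="invoices/2016-04.csv")
print(reply["Body"].read().decode())
```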

If managed appropriately, software-defined object storage can be purchased and added in small increments, approaching an operating expense (OpEx) model that is far more easily budgeted. Costs are reduced. Flexibility is increased. Agility and responsiveness improve as scaling is simplified. Everything just gets better.

Under The Red Hat

Red Hat Ceph Storage was designed from the ground up to support object storage. Learn more at http://www.redhat.com/en/technologies/storage/ceph or call your Red Hat partner today.

Steve has more than 15 years' experience in product marketing and management serving enterprise customers, working for industry leaders such as Red Hat, NetApp, Violin Memory, and HP. He has travelled around the world in his career and launched numerous storage and ...