Commentary | Cloud // Cloud Storage
George Crump, InformationWeek | 4/27/2012 10:49 AM
How To Protect The Big Data Archive

Your big data disaster recovery strategy must consider size, cost, and the unpredictable, but inevitable, need to access old data.

Protecting a big data archive is different from protecting big data analytics--or it should be. While both types of big data are, well, big, archives typically have the larger capacity. Full backups are not an option (too much data), and a disk-only backup strategy may not be realistic (too expensive). Big data archives must have data protection embedded into the architecture, because the data sets are too large to protect with a separate process.

As we discussed in a recent column, big data archive environments store dozens of petabytes of content, often video, audio, or images. That content can sit idle for years until a portion of it becomes active for a period of time, triggered by some event. A common example: when a celebrity dies or gets into trouble, video and photos from that person's past, previously accessed infrequently, become heavily requested. The size of these environments makes backup almost impossible, but the likelihood of eventually needing that data makes protection a top requirement.

The data set's size and access pattern mean that a disk-only archive store may not be practical. The challenge is not capacity: as we discuss in our article "Is Your File Server Choking," object-based storage from cloud infrastructure suppliers has eliminated the key scaling issues facing traditional file systems, delivering near-infinite capacity and handling trillions of files. The challenge is the cost of providing a near-infinite, disk-based storage system and then backing it up via replication to a second, equally large disk-based system.

As a result, tape has a role, and potentially a prominent one, in big data archives. Archive file systems, like those offered by members of the Active Archive Alliance, can support tape as an integral part of the environment. A large percentage of the active data set can still be kept on disk for instant access, but each object can also be copied to a tape or two upon creation. The second tape can be used for backup and disaster recovery. "Backup" thus happens as the data is created or modified, not as a single, separate, nightly process. As the disaster-recovery tape fills, it can be moved to a secure offsite location. Eventually, tape can become the only location of cold data, which is scrubbed from disk.
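The protect-on-create flow described above can be sketched in a few lines. This is an illustrative model only, assuming a hypothetical ingest pipeline; the class and method names (ArchiveWriter, TapePool, ingest, scrub) are invented for the example and do not correspond to any real archive product's API.

```python
# Sketch of "protect on create": each object landed on the archive's
# disk tier is immediately copied to two tapes -- one for the active
# archive, one for offsite disaster recovery. All names are
# hypothetical; no real archive API is being modeled here.

from dataclasses import dataclass, field

@dataclass
class TapePool:
    name: str
    contents: list = field(default_factory=list)

    def write(self, object_id: str) -> None:
        self.contents.append(object_id)

@dataclass
class ArchiveWriter:
    disk: list = field(default_factory=list)
    archive_tape: TapePool = field(default_factory=lambda: TapePool("archive"))
    dr_tape: TapePool = field(default_factory=lambda: TapePool("offsite-dr"))

    def ingest(self, object_id: str) -> None:
        # 1. Land the object on disk for instant access.
        self.disk.append(object_id)
        # 2. Protection happens now, not in a nightly backup window:
        #    copy to the archive tape and the DR tape on creation.
        self.archive_tape.write(object_id)
        self.dr_tape.write(object_id)

    def scrub(self, object_id: str) -> None:
        # Once both tape copies exist, disk space can be reclaimed;
        # tape becomes the only location of the cold data.
        if (object_id in self.archive_tape.contents
                and object_id in self.dr_tape.contents):
            self.disk.remove(object_id)

writer = ArchiveWriter()
writer.ingest("celebrity-footage-1998.mov")
writer.scrub("celebrity-footage-1998.mov")
```

The point of the sketch is the ordering: the two tape copies are made as a side effect of ingest, so there is never a window where petabytes of unprotected data wait for a separate backup job.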

This is not an "abandon disk" recommendation; it is a recommendation to be realistic. Put as much data on disk as makes sense and as the budget allows. Disk is needed for the data that will become hot, driven by the next world event, and for recently created data, since that is the most likely to be accessed.

Tape is needed to back up all the information and to cost-effectively store the petabytes of old information that won't be accessed. As we discuss in our article "Comparing LTO-6 to Scale Out Storage for Long-Term Retention," Linear Tape-Open (LTO) now offers transportability, thanks to the Linear Tape File System (LTFS), and LTO-6 brings massive capacity for pennies per gigabyte.
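The "pennies per gigabyte" claim is easy to sanity-check with back-of-the-envelope arithmetic. LTO-6 cartridges hold 2.5 TB native; the $50 cartridge price below is an illustrative assumption, not a quoted figure from the article.

```python
# Rough cost arithmetic for an LTO-6 archive tier. The cartridge
# price is an assumed, illustrative number; native capacity is the
# LTO-6 specification figure of 2.5 TB per cartridge.
LTO6_NATIVE_GB = 2500          # 2.5 TB native per cartridge
CARTRIDGE_PRICE_USD = 50.0     # assumed street price, for illustration

cost_per_gb = CARTRIDGE_PRICE_USD / LTO6_NATIVE_GB
print(f"${cost_per_gb:.3f}/GB")   # about 2 cents per GB

# Cartridges needed for one copy of a 10 PB archive (native capacity):
archive_gb = 10 * 1000 * 1000
cartridges = -(-archive_gb // LTO6_NATIVE_GB)   # ceiling division
print(cartridges)                # 4000 cartridges
```

At these assumed prices a 10 PB copy lands around $200,000 in media, which is the gap disk-only replication struggles to close.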

When we are dealing with petabytes of data, moving it all at once, even across the best of networks, is not an option. Protecting these large data sets as they are created or modified, using the right combination of disk and tape, will allow these archives to store all the information required to keep them useful for years to come.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.

