Will RAID Die In 2012?

The time it takes to rebuild a RAID-protected volume makes it unwieldy with today's high-capacity drives.

George Crump, President, Storage Switzerland

December 28, 2011


RAID has become a staple of the modern-day storage system, but as the number and capacity of drives in a storage system continue to increase, questions have arisen about the viability of RAID. At issue is the amount of time it takes for a RAID-protected volume to rebuild itself after a drive failure. While 2011 brought many predictions of RAID's demise, it continues to be the protection algorithm of choice for most storage systems. Will 2012 be any different?

In Storage Switzerland's recent article "What is RAID?" we explain that RAID is a protection scheme that allows a volume to survive a drive failure and still provide access to the data on that volume. The problem is that with today's drive technology, the time it takes to rebuild a failed drive is now measured in double-digit hours, if not days. During that window performance can degrade, and there is the risk of additional drive failures. If more drives fail than the RAID level can tolerate, the result is complete data loss, and recovery from backup must begin.
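
For a rough sense of scale, here is a back-of-the-envelope sketch in Python. The drive size and rebuild rate are assumed figures chosen for illustration, not measurements:

```python
# Rough rebuild-time estimate for a single failed drive
# (illustrative assumptions, not measured figures).
drive_capacity_tb = 2.0    # assumed drive size
rebuild_rate_mb_s = 50.0   # assumed effective rebuild rate while the
                           # array continues to serve production I/O

capacity_mb = drive_capacity_tb * 1_000_000   # decimal TB -> MB
rebuild_hours = capacity_mb / rebuild_rate_mb_s / 3600

print(f"Estimated rebuild time: {rebuild_hours:.1f} hours")
# Roughly 11 hours at 50 MB/sec; a busy array that throttles rebuild I/O
# can stretch this into days, which is the window of added risk.
```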

There is also the reality that a drive is more likely to encounter an error as its capacity increases. Hard drives are specified with a bit error rate (BER), which essentially describes how much data can be read before an unrecoverable read error is expected. That rate has stayed relatively the same while drive capacities have skyrocketed. As a result, a 2-TB drive is significantly more likely than a 1-TB drive to hit an error when its entire contents are read, which is exactly what happens during a RAID rebuild.
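
To put rough numbers on that, here is a sketch that assumes a published rate of one unrecoverable read error per 10^14 bits read, a common consumer SATA specification (enterprise drives are typically rated an order of magnitude better):

```python
import math

# Assumed spec: one unrecoverable read error (URE) per 1e14 bits read.
p_ure_per_bit = 1e-14

def p_error_full_read(capacity_tb: float) -> float:
    """Probability of at least one URE while reading an entire drive."""
    bits = capacity_tb * 1e12 * 8   # decimal TB -> bits
    # 1 - (1 - p)^bits, computed stably for a tiny per-bit probability
    return -math.expm1(bits * math.log1p(-p_ure_per_bit))

for tb in (1, 2):
    print(f"{tb}-TB full read: {p_error_full_read(tb):.1%} chance of a URE")
# About 7.7% for 1 TB versus 14.8% for 2 TB with this spec: the error
# *rate* is unchanged, but reading more bits per rebuild means more
# exposure to an unrecoverable error.
```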

Given this combination of factors, it is likely that many large storage systems will be in a constant state of rebuild. Clearly the industry is dealing with this reality. We didn't abandon RAID 5 or RAID 6 last year. The most common "solution" has been to just live with the problem. Storage vendors can do this by making sure that there is enough storage controller processing power to provide adequate system performance while the rebuild is occurring. It would not surprise me to see some vendors allocate special standby processors to help with the rebuild process.

Another solution for RAID may be to use flash-based memory for all mission-critical data. While flash modules can fail just like hard drives, the performance of flash makes the rebuild process significantly faster. In our testing, a rebuild of a RAID-protected flash volume typically takes less than 15 minutes.
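
The arithmetic from the earlier sketch shows why. With an assumed 400-GB flash module rebuilding at an assumed sustained 500 MB/sec (both hypothetical figures, not test results), the estimate lands comfortably inside that window:

```python
# Illustrative flash rebuild estimate (assumed module size and rate).
module_gb = 400            # assumed flash module capacity
rebuild_rate_mb_s = 500.0  # assumed sustained rebuild rate for flash

rebuild_minutes = (module_gb * 1000) / rebuild_rate_mb_s / 60
print(f"Estimated flash rebuild: {rebuild_minutes:.0f} minutes")
# Roughly 13 minutes: smaller capacity points and much faster media keep
# the rebuild window, and the exposure to a second failure, short.
```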

Eventually, though, we may just throw RAID out altogether and go with an erasure coding algorithm, or even more of a mirroring and replication strategy. After all, capacity is now inexpensive, and having a storage system that can automatically maintain x number of copies of data may be the simplest and most practical approach for data that is going to remain on a hard disk. This approach also offers greater granularity, since you can set different levels of redundancy for different types or ages of data.
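
As a simple illustration of that trade-off, the sketch below compares the raw capacity needed to hold 100 TB of data with three full copies versus a 10+2 erasure code. The parameters are assumptions chosen to show the shape of the math, not vendor figures:

```python
# Raw capacity required to protect 100 TB of user data
# (illustrative parameters only).
user_data_tb = 100

def replication_raw(copies: int) -> float:
    return user_data_tb * copies

def erasure_code_raw(data_shards: int, parity_shards: int) -> float:
    return user_data_tb * (data_shards + parity_shards) / data_shards

print(f"3 full copies:       {replication_raw(3):.0f} TB raw")
print(f"10+2 erasure coding: {erasure_code_raw(10, 2):.0f} TB raw")
# 300 TB versus 120 TB: both tolerate two simultaneous failures, but the
# copy-count approach is easier to tune per data type or age, which is
# the granularity point above.
```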

My expectation is that we will see a shift toward flash storage for mission-critical active data where RAID rebuilds will be less time-consuming and space efficiency is more important due to cost. Then we can use more of a replication, redundant copy strategy for older data stored on hard disk.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.


About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

