How To Beat 3 Disaster Recovery Roadblocks

Don't let data, WAN and integration challenges knock automated failover off course.

Jasmine McTigue, Principal, McTigue Analytics

February 15, 2013

4 Min Read

Download the entire InformationWeek February special issue on disaster recovery, distributed in an all-digital format as part of our Green Initiative. (Registration required.)


3 Disaster Recovery Roadblocks

Not so long ago, InformationWeek surveys showed that many companies' disaster recovery plans were largely incomplete and unproven. For example, among 420 respondents to our 2011 Backup Technologies Survey, just 38% tested their restoration processes at least once a year for most applications. Only half backed up all their virtual servers every week.

Since then, things have improved, particularly the technology. This shift has come about because the applications IT delivers are increasingly central to business operations, and downtime means serious money lost. That translates into budget for business continuity and disaster recovery programs. Eighty percent of respondents to our new InformationWeek 2013 State of Storage Survey have strategies in place, and half of them test regularly.

The next step is to automate the process of failing over to a warm backup site -- one where hardware is up and running and data is regularly replicated from the production site. Removing people from the equation streamlines the process and lessens the possibility of error and costly delay.
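
To make the idea concrete, here is a minimal sketch of what the trigger end of that automation can look like. The hostnames, thresholds and runbook steps are hypothetical placeholders, not any vendor's product; a real deployment would hand the promote and repoint steps to whatever replication and DNS tooling is already in place.

    # A rough sketch of an automated failover trigger, assuming hypothetical
    # endpoints and runbook steps; a real deployment would call the replication
    # and DNS tooling already in place.
    import socket
    import time

    PRIMARY = ("app.prod.example.com", 443)   # hypothetical production endpoint
    PROBE_INTERVAL_SEC = 30
    FAILURE_THRESHOLD = 3                     # consecutive misses before acting

    def primary_is_up(endpoint, timeout=5):
        """Return True if a TCP connection to the primary application succeeds."""
        try:
            with socket.create_connection(endpoint, timeout=timeout):
                return True
        except OSError:
            return False

    def fail_over_to_warm_site():
        """Placeholder for the real runbook: confirm the replica is current,
        start the warm-site application stack, then repoint DNS or the load
        balancer so users land at the recovery site."""
        print("Promoting warm-site replica and redirecting traffic")

    misses = 0
    while True:
        misses = 0 if primary_is_up(PRIMARY) else misses + 1
        if misses >= FAILURE_THRESHOLD:
            fail_over_to_warm_site()
            break
        time.sleep(PROBE_INTERVAL_SEC)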

We realize that many IT pros who priced an automation project just a few years ago came away with sticker shock. Between replication software, running systems in warm sites and bandwidth costs, bringing recovery times down from days to minutes cost more than most companies could justify. Implementing an automated recovery plan still isn't cheap, but with some new technologies and careful engineering, prices have come down enough that recovery times of minutes are now within reach of a reasonable budget; we discuss several of these technologies in our report on BC/DR and the cloud.

But while tech advances have handed IT pros a plethora of new tools to streamline failover to a warm site, complexities remain. Three areas in particular can derail automated disaster recovery: not having complete data sets in place for critical applications, a lack of bandwidth and incomplete integration.

Fresh, Hot Data

Research: 2013 State of Storage


Our full 2013 State of Storage report is free with registration. This report includes 53 pages of action-oriented analysis, packed with 45 charts.

What you'll find:

  • Technology and software trends

  • Six recommendations for the year ahead

Get This And All Our Reports

Getting an application up quickly at a warm site means that its data must be there, ready and waiting. In manual failover scenarios, data can be a little dated: A stakeholder decides on reasonable RPO and RTO (recovery point and recovery time objective) metrics and agrees that some data might be lost. But for automated recovery of applications to work, completeness and integrity of the data at the recovery site are critical.
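
One way to enforce that requirement is to gate the automation on data freshness: before a failover fires, confirm that the newest replicated data at the warm site falls inside the agreed RPO. The sketch below assumes a 15-minute RPO and a hypothetical get_last_replicated_time() helper standing in for whatever the replication product actually reports.

    # A sketch of a data-freshness gate, assuming a 15-minute RPO and a
    # hypothetical helper that reports the newest replicated timestamp at the
    # warm site; a real check would query the replication product or array.
    from datetime import datetime, timedelta, timezone

    RPO = timedelta(minutes=15)   # assumed recovery point objective

    def get_last_replicated_time():
        """Hypothetical stand-in: timestamp of the newest data at the warm site."""
        return datetime.now(timezone.utc) - timedelta(minutes=4)

    def replica_meets_rpo():
        lag = datetime.now(timezone.utc) - get_last_replicated_time()
        return lag <= RPO

    if replica_meets_rpo():
        print("Warm-site data is within the RPO; automated failover may proceed")
    else:
        print("Replica is stale; alert operators rather than failing over")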

Primary array replication is the best way to mirror data from one site to another without human involvement. However, the licensing and storage costs associated with replication have tabled many a failover project. In the last couple of years, we've seen a number of changes: The commoditization of enterprise storage, the emergence of upstart providers of appliances and software, and the introduction of managed replication services have dramatically driven down the cost, regardless of the platform or technique used. In fact, our State of Storage report shows the percentage of respondents using replication on a widespread or limited basis ticked up three points since last year, to 70%.

But replicating data to a warm site still requires bandwidth, and plenty of it, which brings us to our second roadblock.
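
A back-of-the-envelope calculation shows why. The figures below (daily change rate, peak factor and protocol overhead) are illustrative assumptions, not survey data, but the arithmetic is the same one a bandwidth-sizing exercise would use.

    # Illustrative numbers only: estimate the WAN capacity needed to keep
    # replication current for a site that changes about 200 GB of data a day.
    daily_change_gb = 200        # data changed, and therefore replicated, per day
    peak_factor = 3              # busy-hour change rate vs. the daily average
    protocol_overhead = 1.2      # rough allowance for TCP and replication overhead

    avg_mbps = daily_change_gb * 8 * 1000 / 86_400           # GB/day -> Mb/s
    needed_mbps = avg_mbps * peak_factor * protocol_overhead

    print(f"Average replication rate: {avg_mbps:.1f} Mb/s")
    print(f"WAN capacity to keep pace at peak: {needed_mbps:.1f} Mb/s")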

To read the rest of the article, download the InformationWeek February special issue on disaster recovery.


About the Author(s)

Jasmine McTigue

Principal, McTigue Analytics

Jasmine McTigue is principal and lead analyst of McTigue Analytics and an InformationWeek and Network Computing contributor, specializing in emergent technology, automation/orchestration, virtualization of the entire stack, and the conglomerate we call cloud. She also has experience in storage and programmatic integration.


Jasmine began writing computer programs in Basic on one of the first IBM PCs; by 14 she was building and selling PCs to family and friends while dreaming of becoming a professional hacker. After a stint as a small-business IT consultant, she moved into the ranks of enterprise IT, demonstrating a penchant for solving "impossible" problems in directory services, messaging, and systems integration. When virtualization changed the IT landscape, she embraced the technology as an obvious evolution of service delivery even before it attained mainstream status and has been on the cutting edge ever since. Her diverse experience includes system consolidation, ERP, integration, infrastructure, next-generation automation, and security and compliance initiatives in healthcare, public safety, municipal government, and the private sector.
