Don't Bring Old Thinking To A New Disaster Recovery Model

Private and public clouds can make high-quality DR/COOP more affordable -- if you don't undermine yourself with outmoded assumptions.

Michael Biddick, CEO, Fusion PPT

December 2, 2011

3 Min Read

InformationWeek Digital Supplement: BC/DR - December 2011

Download the December 2011 InformationWeek BC/DR Digital Supplement, distributed in an all-digital format as part of our Green Initiative (registration required).
We will plant a tree for each of the first 5,000 downloads.


Last week, I toured a slick new facility where businesses can outsource hosting of their private clouds. Only one problem: It plans to, at least initially, use off-site tape for disaster recovery and continuity of operations (DR/COOP). Seriously? The whole point of a private cloud is redundancy, elasticity, and seamless performance -- hardly attributes we associate with tape.

Now, this company has the right idea overall. Private clouds, whether hosted in your data center or at a provider's site, are a smart way to minimize the risk of data loss--if you can extend private cloud principles to often hidebound DR/COOP plans.

There are a few areas to address in making sure a provider can deliver when things go wrong. Start with the basics: Fully redundant data center operations mean a second, geographically segregated facility with adequate WAN capacity and processes to ensure that systems and applications fail over and restore correctly. Drill the provider on the particulars: How soon after loss will data restoration occur? Problems like system failure may be straightforward, but what about when someone accidentally deletes a presentation an hour before the CEO goes on stage? Once restoration is requested, how long will it take? How long will backups be retained? What recovery points can the provider deliver at a price you can afford?

A critical element of any disaster recovery effort is regular, realistic testing. Have your key staff members access mission-critical systems and execute their job functions to make sure everything works. Because companies grow and change, the plan needs to be a living document, reviewed and revised often. But testing is often forgotten (read: no one has the stomach to insist on it) or performed on a limited basis, with mixed results. Sometimes, IT is afraid that testing will cause a service outage. Maybe it will, but better to surface problems during a drill than during a real event, when services are disrupted and it's too late to make fixes. As the saying goes, if you think education is expensive, try ignorance. When staffing a private cloud initiative, carve out a role focused on DR/COOP, and include testing results in performance reviews.

Legacy apps often don't adapt to virtualized private clouds, and rarely do I see a full appreciation of the investment required to protect these clunkers. "Unique" business requirements are often given as reasons for not conforming to the standards necessary to do DR/COOP in the cloud. CIOs must wield an iron fist when it comes to legacy apps, because decentralized budgets and scattered power bases are the enemies of unified business processes. Again, this may not be part of the overall DR/COOP plan, but CIOs who require special approval for retaining legacy apps and are involved in application rationalization will be most successful.

Discouraged? Don't be. While the costs can be high, cloudifying your first-line defense against business interruptions can deliver a more resilient infrastructure, with better physical security and redundant power, telecom, and data links. And it's not as if maintaining a separate DR site, replete with miles of tape shipped to a vault and a "testing strategy" that involves checkboxes in a binder, is all that cheap, either.

About the Author(s)

Michael Biddick

CEO, Fusion PPT

As CEO of Fusion PPT, Michael Biddick is responsible for overall quality and innovation. Over the past 15 years, Michael has worked with hundreds of government and international commercial organizations, leveraging his unique blend of deep technology experience and business and information management acumen to help clients reduce costs, increase transparency, and speed efficient decision making while maintaining quality. Prior to joining Fusion PPT, Michael spent 10 years with a boutique consulting firm and Booz Allen Hamilton, developing enterprise management solutions. He previously served on the academic staff of the University of Wisconsin Law School as the Director of Information Technology. Michael earned a Master of Science from Johns Hopkins University and a dual Bachelor's degree in Political Science and History from the University of Wisconsin-Madison. Michael is also a contributing editor at InformationWeek Magazine and Network Computing Magazine and has published over 50 recent articles on cloud computing, federal CIO strategy, PMOs, and application performance optimization. He holds multiple vendor technical certifications and is a certified ITIL v3 Expert.
