With proper preparation, cloud-based disaster recovery will enable your organization to weather the season of storms and power outages.

Brian Burns, Director, Cloud Services, Agile-Defense

July 9, 2014

4 Min Read


The season of power outages has arrived. Coastal tropical storms and hurricanes and Midwest tornadoes will bring outages and, unfortunately, plenty of loss. Government agencies that are not properly prepared will see applications and data centers swept away with the same speed and suddenness as the storms themselves -- even with so much advanced technology and so many outstanding preemptive tools and systems available.

It was only two years ago that Hurricane Sandy hit the East Coast, knocking out data centers from Virginia to New York to New Jersey. Those facilities lost public power and went dark for days, leaving critical applications and data unavailable.

[Thinking ahead: Hurricane Relief Planners Use Mapping, Data Visualization.]

For government agencies that house applications in their own internal data centers, public multi-tenant clouds offer a lower-cost, easy-to-deploy disaster recovery/continuity of operations (DR/COOP) solution. The steps below can help agencies plan and execute a failover strategy with minimal or no disruption to the production environment.

1. Know your mission-critical applications. Determine which of your Web-based applications cannot go down for even a short period of time. Identify these applications along with their dependencies and the minimum hardware required to operate them. Document your findings, as they will become part of your DR/COOP plan and will help you when you move on to step two.
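To make this concrete, here is a minimal sketch of such an inventory in Python; the application names, dependencies, and hardware figures are purely illustrative placeholders for your own survey results.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalApp:
    """One mission-critical application and what it needs to run."""
    name: str
    max_tolerable_downtime_minutes: int  # recovery time objective (RTO)
    dependencies: list = field(default_factory=list)
    min_vcpus: int = 1
    min_ram_gb: int = 2

# Illustrative entries; replace with the results of your own application survey.
inventory = [
    CriticalApp("benefits-portal", 15,
                dependencies=["postgres", "auth-service"],
                min_vcpus=4, min_ram_gb=16),
    CriticalApp("public-website", 60,
                dependencies=["cms"], min_vcpus=2, min_ram_gb=8),
]

for app in inventory:
    print(f"{app.name}: RTO {app.max_tolerable_downtime_minutes} min, "
          f"needs {app.min_vcpus} vCPU / {app.min_ram_gb} GB RAM, "
          f"depends on {', '.join(app.dependencies)}")
```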

2. Choose a compliant cloud service provider, or hold your current one to a checklist. Identify a cloud service provider (CSP) that can support your business and technical requirements. If possible, choose a CSP that uses the same hypervisor you use in-house; this will make mirroring easier, faster, and cheaper in the long run.
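One lightweight way to apply such a checklist is to score each candidate CSP against your hard requirements. The sketch below is hypothetical; the criteria and the provider's answers are assumptions for illustration, not an assessment of any real vendor.

```python
# Hypothetical checklist: each entry is (criterion, hard requirement?).
checklist = [
    ("FedRAMP authorized", True),
    ("Uses the same hypervisor as our in-house environment", True),
    ("Supports automated VM replication", True),
    ("Data centers outside our geographic region", False),  # nice to have
]

# Illustrative answers gathered from a provider questionnaire.
provider_answers = {
    "FedRAMP authorized": True,
    "Uses the same hypervisor as our in-house environment": True,
    "Supports automated VM replication": False,
    "Data centers outside our geographic region": True,
}

failures = [c for c, required in checklist
            if required and not provider_answers.get(c, False)]
print("Disqualified on:", failures if failures else "none -- candidate passes")
```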

3. Configure remote mirrored virtual machines. Depending on the hypervisor you are currently using for virtualization, either set up the data center to automatically mirror these virtual machines (VMs), or arrange to set up the remote VMs manually. Either way, make sure there is a mirrored VM for each production system that needs emergency backup.
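Whatever tooling handles the mirroring, it helps to verify the result. The sketch below is a simple reconciliation check, assuming you can export VM inventories from your hypervisor's management console and the CSP; the VM names are placeholders.

```python
# Placeholder inventories; in practice, pull these from your hypervisor's
# management API or the CSP's console export.
production_vms = {"web-01", "web-02", "db-01", "auth-01"}
dr_mirrors = {"web-01", "db-01", "auth-01"}

missing = production_vms - dr_mirrors
if missing:
    print("No DR mirror configured for:", ", ".join(sorted(missing)))
else:
    print("Every production VM has a mirrored counterpart.")
```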

4. Set up the failover to be more than just DNS. With the mirrored VMs tested and in place, it's time to select a technology that will handle the failover if and when a disaster occurs. Avoid a technology that depends on a change to Internet domain name system (DNS) records. While a DNS change will work, in most cases there will be downtime of many hours, or possibly more than a day, before users can reach the DR/COOP site. Instead, seek a technology that can detect a failure in your primary data center and redirect end users to the DR/COOP site instantly.
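As a rough illustration of detect-and-redirect, the sketch below polls a hypothetical health-check URL and triggers a stubbed redirect step after several consecutive failures. In a real deployment, the redirect would update a global load balancer or similar traffic-steering layer rather than print a message; the URL and threshold are assumptions.

```python
import time
import urllib.error
import urllib.request

PRIMARY_URL = "https://primary.example.gov/health"  # hypothetical endpoint
FAILURE_THRESHOLD = 3    # consecutive failed probes before failing over
PROBE_INTERVAL_SEC = 30

def primary_is_healthy() -> bool:
    """Return True if the primary site answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_URL, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def redirect_to_dr_site():
    """Stub: in practice, repoint a global load balancer or anycast route
    so users reach the DR/COOP site without waiting on DNS propagation."""
    print("Primary down -- redirecting traffic to DR/COOP site")

failures = 0
while True:
    failures = 0 if primary_is_healthy() else failures + 1
    if failures >= FAILURE_THRESHOLD:
        redirect_to_dr_site()
        break
    time.sleep(PROBE_INTERVAL_SEC)
```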

5. Perform regular failover tests. With the above steps complete, the final step is an end-to-end failover test, which must be run routinely against the DR/COOP site. Depending on internal policies, this test may be as small as a single application's failover, or you may wish to schedule a full site failover. Whichever you choose, document the process, the steps taken during the test, and a clear record of the results. If the failover did not work, refer back to your documentation, identify what did not behave as expected, adjust your plan (and documentation), and test again. You may need to repeat this cycle several times until you have a bulletproof failover plan.
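Record-keeping is easier when the test itself writes the record. The sketch below checks a set of hypothetical DR endpoints and appends a timestamped pass/fail entry to a log file; the endpoint URLs and application names are assumptions.

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

# Illustrative DR/COOP endpoints to verify after a failover exercise.
DR_ENDPOINTS = {
    "benefits-portal": "https://dr.example.gov/benefits/health",
    "public-website": "https://dr.example.gov/www/health",
}

def check(url: str) -> bool:
    """Return True if the DR endpoint responds to its health check."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "results": {name: ("pass" if check(url) else "fail")
                for name, url in DR_ENDPOINTS.items()},
}

# Append to a running log so every test leaves a documented record.
with open("failover_test_log.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")
print(entry)
```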

While we can't control Mother Nature, we can control our preemptive strikes against data disasters. A single emergency can take down a data center, but a simple plan and proper preparation can prevent disaster. Whether you bring the expertise in-house or outsource it, make the time and budget available to plan properly so you are not out of luck when an outage hits.


About the Author(s)

Brian Burns

Director, Cloud Services, Agile-Defense

Brian Burns is the Director of Cloud Services for Agile-Defense Inc., a leading provider of cloud migration and day-to-day management services and IT for the Department of Defense and other public sector agencies. He has more than 17 years of technology and cloud experience and specializes in working with clients on FedRAMP compliance. 

