Virtualization And Your BC/DR Plan
Disaster recovery and business continuity are made easier by virtualization, but cost and complexity are holding back users.
When we set out to look at the use of server and desktop virtualization in business continuity and disaster recovery strategies, the last thing we expected was to have to make a case for adopting a BC/DR plan. But of the 681 business technology professionals who responded to our InformationWeek Analytics Business Continuity/Disaster Recovery Survey, 17% have no BC/DR plan, and 20% are still working on one.
Cost and complexity are holding most of them back. "DR is a bear to get people to spend money on and difficult to justify--except right after a data loss," says one respondent.
So how best to overcome the obstacles to continuity planning? First, get the business to drive the project, enabled by IT. Then leverage the latest technologies; virtualization, in particular, improves ROI, cuts costs, and makes better use of resources.
Why Virtualize?
There are all the usual reasons: Virtualization consolidates server infrastructures, thereby cutting power, cooling, floor space, and other costs. It lets you reduce the number of servers you use so that the data center doesn't sprawl out of control. It also provides quick provisioning capabilities so you can respond to project requests faster and meet sporadic utilization spikes.
In addition, hardware is advancing faster than software designers can keep up with. As a result, applications underutilize the processor and memory capacity available in most equipment. Take a quad-core server with four or six sockets: that's more compute than a single instance of Exchange 2007 can use. It's therefore beneficial to virtualize; even if you get only a 2-for-1 consolidation ratio, that's still better than dedicating expensive hardware to applications that can't take advantage of it.
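The capacity argument above is easy to put in numbers. A minimal sketch, using assumed figures for illustration (the core counts and per-app footprint are hypothetical, not benchmarks):

```python
# Illustrative capacity math; all figures below are assumptions.
sockets = 4
cores_per_socket = 4
total_cores = sockets * cores_per_socket       # 16 physical cores per host

cores_one_app_uses = 4                         # assumed single-app footprint
spare_cores = total_cores - cores_one_app_uses # capacity one workload leaves idle

# Even a conservative 2-for-1 consolidation halves the server count.
consolidation_ratio = 2
servers_before = 10
servers_after = servers_before // consolidation_ratio

print(total_cores, spare_cores, servers_after)  # 16 12 5
```

With 12 of 16 cores idle under a single workload, even the modest 2-for-1 ratio the article cites turns ten physical servers into five hosts.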
Virtualization can also solve high-availability issues on the local LAN, minimizing or even eliminating downtime from server failures brought on by faulty hardware. Yesterday, when a physical server went south, rebuilding took two hours at best, more likely four or more. Virtualization can leverage high-availability technologies to put a server back in production in minutes. With the right technology in place, you can even eliminate downtime altogether. Here are three HA options:
• VM restart on another host. Because virtual machines are a collection of files that aren't bound to any specific hardware, if the host fails, it takes only minutes to power on that VM on another server.
• Traditional clustering. You can extend clustering technology to VMs, and setup is often easier than with physical nodes, especially when it comes to networking.
• Fault tolerance. When enabled, this technology runs primary and secondary VMs in lockstep. That means every process, task, and operation is executed on both VMs. They operate on separate hosts, so in the event of a primary VM failure, the secondary picks up exactly where the primary failed with no interruption.
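The first option above, VM restart on another host, works because a VM is just a set of files on shared storage. A minimal sketch of the idea (the classes and names here are hypothetical, not any vendor's API):

```python
# Hypothetical HA-manager sketch: when a host stops responding, its VMs
# are re-registered and powered on on a surviving host.

class Host:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.vms = []          # VMs currently registered on this host

class HAManager:
    def __init__(self, hosts):
        self.hosts = hosts

    def failover(self):
        """Move VMs off failed hosts onto the first healthy host."""
        restarted = []
        for host in self.hosts:
            if host.healthy:
                continue
            target = next(h for h in self.hosts if h.healthy)
            for vm in host.vms:
                target.vms.append(vm)          # re-register the VM's files
                restarted.append((vm, target.name))
            host.vms = []
        return restarted

hosts = [Host("esx01"), Host("esx02")]
hosts[0].vms = ["mail-vm", "web-vm"]
hosts[0].healthy = False                       # simulate a hardware failure

moved = HAManager(hosts).failover()
print(moved)  # [('mail-vm', 'esx02'), ('web-vm', 'esx02')]
```

Real implementations add heartbeat detection, admission control, and restart priorities, but the core mechanism is this simple: no hardware binding means any healthy host can take over in minutes.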
Final Frontier: Leveraging Virtualization for BC/DR
Get This And All Our Reports
Become an InformationWeek Analytics subscriber for $99 per person per month, with multiseat discounts available, and get our full report on leveraging virtualization for BC/DR.
This report includes 32 pages of action-oriented analysis, packed with 16 charts.
What you'll find:
• A look at the virtualization platforms respondents are using
• Using VDI to drive remote user access in a disaster
• All the data from this survey