Turn VM Chaos To VM Control

Our 5 steps can help you tame your server virtualization environment.

Jasmine McTigue, Principal, McTigue Analytics

November 7, 2011

3 Min Read

Virtualization has changed the way IT does business. From the smallest server closets to the largest data centers, the use of virtual machines makes it easy for IT to quickly deploy applications.

But to reap the benefits, you have to manage insidious productivity thieves, like server sprawl. That's when VMs are running on production systems without any good reason, sapping vital resources, including memory, disk, and power. What if these resources are suddenly needed? Say a physical server fails, and virtual machines attempt to move to another server, but it's clogged with unused VMs. As a result, applications can go down. Further, unused but active VMs can create unplanned network traffic, present additional attack surfaces, impose greater maintenance overhead, and add to software licensing costs.

If your virtual servers are getting out of control, use our five-step methodology to restore order.

>> Keep 'em separated: Step one is simple: Split virtual machines into test and production resource pools. This should reduce the number of unnecessary VMs on production servers and help ensure that resources are available when there's a failure. While this may seem like IT 101, in our experience, administrators seldom effectively draw a line between test and production environments.
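
If your virtualization platform exposes VM names or tags through an API, even a short script can flag machines sitting in the wrong pool. Here's a minimal sketch in Python; the inventory list and the "-tst-" naming convention are hypothetical stand-ins for whatever tags, folders, or resource pools your hypervisor actually exposes.

```python
# Minimal sketch: sort a VM inventory into test and production pools
# based on a naming convention. The inventory and the "-tst-" marker
# are hypothetical; substitute the tags or folders your platform uses.

from collections import defaultdict

inventory = [
    {"name": "web-prd-01", "host": "esx01"},
    {"name": "web-tst-01", "host": "esx01"},
    {"name": "db-prd-01", "host": "esx02"},
]

pools = defaultdict(list)
for vm in inventory:
    pool = "test" if "-tst-" in vm["name"] else "production"
    pools[pool].append(vm["name"])

for pool, vms in sorted(pools.items()):
    print(f"{pool}: {', '.join(vms)}")
```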

>> Plan for failure, test the plan: Systems fail. A high-availability architecture is supposed to serve as a bridge so that critical systems can continue to run in the face of a problem. But HA systems require maintenance; ignore them, and they may not be there when you need them. Thus, it's essential to test your HA environment. Pull the power plug on a critical server running production VMs (backed up, of course). If you haven't done this test in a while, we guarantee you'll learn something new about how your infrastructure responds to failure.
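
When you run that test, script the verification so the results are repeatable. The sketch below is one way to do it, assuming a hypothetical list of critical service endpoints; it simply confirms that each service answers on its port after the failover and reports any that don't.

```python
# Minimal sketch: after a failover test, confirm each critical service
# answers on its expected port. The endpoint list is hypothetical;
# point it at the VMs your HA cluster is supposed to restart.

import socket

CRITICAL_ENDPOINTS = [
    ("erp-app-01.example.local", 443),
    ("sql-01.example.local", 1433),
]

def is_up(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = [(h, p) for h, p in CRITICAL_ENDPOINTS if not is_up(h, p)]
if failures:
    print("Failover test FAILED for:", failures)
else:
    print("All critical services answered after failover.")
```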

>> Get ahead of the business: Don't honor business units' requests to provision a new application without considering other options. Could an existing system used by another division meet their needs with some adjustments? The key here is timing. If IT staff who should have been involved in the selection process are instead brought in after business decisions are made, that's a problem. Once a deployment requisition comes in for a specific application, it's generally too late. Make sure IT is part of any application discussion from the start.

>> Improve visibility: One way to gain insight into your infrastructure is with a network management system that enables automation by checking IT-defined variables and taking action when conditions or thresholds are met. Furthermore, as predictive analysis becomes more commonly used, a network management system can serve as a repository that keeps key data validated, clean, and in one place.
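
Under the hood, most of this automation boils down to comparing a polled metric against an IT-defined threshold and firing an action when the condition holds. The Python sketch below illustrates the pattern only; the metric names, thresholds, and alert hook are hypothetical stand-ins for whatever your management system exposes.

```python
# Minimal sketch of threshold-driven automation: compare polled metrics
# against IT-defined thresholds and trigger an action when a condition
# holds. Metric names, values, and the alert hook are hypothetical.

THRESHOLDS = {
    "host_cpu_pct": 85,        # act when sustained CPU tops 85%
    "datastore_used_pct": 90,  # act when a datastore passes 90% full
}

def fetch_metrics():
    # Placeholder: in practice, pull these from your NMS or hypervisor API.
    return {"host_cpu_pct": 91, "datastore_used_pct": 72}

def alert(metric, value, limit):
    # Placeholder action: open a ticket, send mail, or kick off a runbook.
    print(f"ALERT: {metric}={value} exceeds threshold {limit}")

metrics = fetch_metrics()
for metric, limit in THRESHOLDS.items():
    value = metrics.get(metric)
    if value is not None and value > limit:
        alert(metric, value, limit)
```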

>> Consolidate and automate: As server virtualization matures, the focus is shifting from pure consolidation to automation. And much of the automation within a virtual environment is driven by API scripting. For example, an API script might use a synthetic transaction to check application performance. If performance is poor, the script might spin up a new virtual machine in the application farm. This approach is convenient, but if your coding chops are rusty, consider a programming course to prepare for next-generation API scripting.
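
Purely as a sketch, that logic might look like the following, where the application URL, the latency threshold, and the provisioning call are hypothetical placeholders for your own farm's API.

```python
# Minimal sketch: run a synthetic transaction against an application
# and, if response time is poor, request one more VM for the farm.
# The URL, threshold, and provisioning function are hypothetical.

import time
import urllib.request

APP_URL = "https://app.example.local/health"  # hypothetical endpoint
MAX_LATENCY_SECONDS = 2.0

def synthetic_transaction(url):
    """Time a simple GET request and return elapsed seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

def provision_vm(farm="app-farm"):
    # Placeholder: call your virtualization platform's API here,
    # e.g., clone a template into the farm's resource pool.
    print(f"Requesting one additional VM in {farm}")

if synthetic_transaction(APP_URL) > MAX_LATENCY_SECONDS:
    provision_vm()
```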

The Zen of Virtual Maintenance


Our full report "The Zen of Virtualization Infrastructure Maintenance" is free with registration.

This report offers action-oriented analysis on server virtualization. What you'll find:

  • How VM sprawl can affect high availability

  • Five steps to build a robust, automated virtual environment

Get This And All Our Reports


InformationWeek: Nov. 14, 2010 Issue

Download a free PDF of InformationWeek magazine (registration required)

About the Author

Jasmine McTigue

Principal, McTigue Analytics

Jasmine McTigue is principal and lead analyst of McTigue Analytics and an InformationWeek and Network Computing contributor, specializing in emergent technology, automation/orchestration, virtualization of the entire stack, and the conglomerate we call cloud. She also has experience in storage and programmatic integration.


Jasmine began writing computer programs in Basic on one of the first IBM PCs; by 14 she was building and selling PCs to family and friends while dreaming of becoming a professional hacker. After a stint as a small-business IT consultant, she moved into the ranks of enterprise IT, demonstrating a penchant for solving "impossible" problems in directory services, messaging, and systems integration. When virtualization changed the IT landscape, she embraced the technology as an obvious evolution of service delivery even before it attained mainstream status and has been on the cutting edge ever since. Her diverse experience includes system consolidation, ERP, integration, infrastructure, next-generation automation, and security and compliance initiatives in healthcare, public safety, municipal government, and the private sector.
