

News  |  4/29/2009 06:00 PM

Companies Push The Limits Of Virtualization

New software, hardware, and networking systems let IT managers stung by the recession go further with server consolidation.

The Limits
Ulmer encounters three severe limits on how many VMs a server can run: CPU, memory, and I/O. To maximize the number of virtual machines you're running, his advice is to first maximize your servers' CPU count and memory.

The Sun Microsystems Sun Fire X4600 servers he uses, designed by former Sun chief server architect Andy Bechtolsheim, max out what the Advanced Micro Devices Opteron chip can do. Twenty-eight of the servers are loaded with 128 GB of memory, a big number for their vintage; six have 256 GB, going far beyond most of the current generation. By comparison, IBM and HP plan to put a maximum of 128 GB and 144 GB of memory, respectively, in their next generation of Xeon 5500 servers.

Ulmer's in the process of upgrading the 128-GB servers to 256 GB. "Memory is the weak link. If you suffer memory depletion, it's the endgame for VMware's hypervisor" as it slows to a crawl, he says.

Ulmer's four- and eight-way Sun Fires are each equipped with dual- or quad-core Opteron CPUs; that's up to 16 or 32 cores per server. One of the few ways to make use of all those CPU cycles is to host multiple virtual machines. Even at 30 VMs per server, Ulmer has discovered he's still using only 20% of available CPU cycles.
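
The math behind Ulmer's numbers is simple capacity arithmetic: whichever resource runs out first caps VM density. The Python sketch below is not from the article; the per-VM figures (2 vCPUs, 4 GB of memory, roughly 10% average CPU utilization, and an assumed 8 GB reserved for the hypervisor) are hypothetical, but they illustrate how memory, not CPU, becomes the binding limit on a 32-core, 128-GB host like his.

```python
# Back-of-the-envelope VM density estimate for a single host.
# All per-VM numbers are illustrative assumptions, not figures from the article.

def max_vms(host_cores, host_mem_gb, vm_vcpus=2, vm_mem_gb=4,
            cpu_per_vm_util=0.10, hypervisor_mem_gb=8):
    """Return the VM count each resource allows and which one binds first."""
    # CPU: each VM keeps its vCPUs busy only a fraction of the time,
    # so effective demand per VM is vcpus * average utilization.
    cpu_limit = int(host_cores / (vm_vcpus * cpu_per_vm_util))
    # Memory: unlike CPU, allocated memory is consumed whether the VM is busy or not.
    mem_limit = int((host_mem_gb - hypervisor_mem_gb) / vm_mem_gb)
    binding = "memory" if mem_limit < cpu_limit else "CPU"
    return cpu_limit, mem_limit, binding

# A 32-core, 128-GB host, similar to the Sun Fire boxes described above:
cpu_limit, mem_limit, binding = max_vms(host_cores=32, host_mem_gb=128)
print(f"CPU allows ~{cpu_limit} VMs, memory allows ~{mem_limit}; {binding} is the limit")
# -> CPU allows ~160 VMs, memory allows ~30; memory is the limit
```

With those assumed figures, memory caps the host at about 30 VMs while the guests collectively keep only around 6 of 32 cores busy, which lines up with the roughly 20% CPU utilization Ulmer reports.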

He's also bought specialized I/O hardware from a startup, Xsigo, which early on saw I/O as a potential bottleneck in virtualization. Xsigo puts converged network adapters on the server to move network and storage traffic coming from the VMs off the server and into an I/O Director, a hardware device that splits traffic up and sends it to its correct destination using 10-Gbps Ethernet pipes.

Most virtualization users let the hypervisor handle VM networking traffic, and that's a big constraint. VMware customers rely on ESX Server's vSwitch, software in the hypervisor that routes network traffic, and that approach has a much greater impact on CPU resources than bypassing it in favor of dedicated hardware. When network traffic appears, the hypervisor stops what it's doing, clears application instructions and data from the chip pipelines and buffers, and lets the vSwitch decide where to send the traffic. Frequent packet flows to other virtual machines, network routers, and storage mean frequent interruptions of VM processing and slower operation. Ulmer believes off-loading network traffic from the hypervisor is one of the keys to increasing the number of VMs a server can run.
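
A rough per-packet cost model shows why this matters. The sketch below is purely illustrative; the cycle counts, clock speed, and packet rate are assumptions rather than measurements from Ulmer or VMware. The point it makes is structural: every packet the software vSwitch touches consumes host CPU cycles that would otherwise run guest workloads, while a hardware offload path consumes a small fraction of that.

```python
# Rough model of CPU overhead from software vs. offloaded packet switching.
# Cycle costs and traffic rates are assumptions for illustration only.

def switching_overhead(packets_per_sec, cycles_per_packet,
                       core_hz=2.3e9, host_cores=32):
    """Fraction of total host CPU consumed just moving packets."""
    cycles_needed = packets_per_sec * cycles_per_packet
    return cycles_needed / (core_hz * host_cores)

traffic = 2_000_000  # aggregate packets/sec from a heavily loaded host (assumed)

soft = switching_overhead(traffic, cycles_per_packet=10_000)   # hypervisor vSwitch path
offload = switching_overhead(traffic, cycles_per_packet=500)   # hardware-assisted path

print(f"software switching: {soft:.1%} of host CPU")
print(f"hardware offload:   {offload:.1%} of host CPU")
```

With these assumed numbers, software switching would absorb more than a quarter of the host's CPU at that packet rate, versus a percent or two with offload, and that kind of headroom translates directly into more VMs per server.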

Because there's so much switching intelligence inside the Xsigo I/O Director, Ulmer uses just two cables from his heavily virtualized servers to the I/O device. Without the hardware device, he says, he'd end up with "a spider's den" of cables. "There'd be so many cables, I believe we'd run into human error," he says.

The I/O Challenge
If Ulmer's right and I/O is the next bottleneck holding back the number of VMs that can run on a server, then Cisco may have stolen a march on IBM, HP, Dell, and Sun as it brings converged network traffic to the virtualized server. In effect, with its Unified Computing System, Cisco is promising to do in Cisco servers and 10 Gigabit Ethernet devices what the Sun servers and Xsigo do with their own combination of hardware.

IBM and HP dispute that Cisco has gained an edge. There's no significant advantage to Cisco's approach, says Gary Thome, director of strategy and architecture for HP's blade group. HP doesn't see the data center as "a network with servers hanging off the end," he says, taking a swipe at Cisco's network orientation.

4 Areas To Watch When You're Adding VMs
1. Management Tools
Make sure you can see all the VMs that are running and find any sleeping ones.
2. Policy Migration
You must have a way to allow security, privacy, and compliance policies to follow your VMs when they move from one server to another.
3. Server Resources
Intel and AMD are adding cores to their latest chips. Make sure your server has enough memory to take advantage of all that CPU power, and don't let I/O cause any slowdowns.
4. Standards Issues
The Fibre Channel over Ethernet standard will ease I/O problems. But until it's available later this year, you'll be buying into a vendor's proprietary interpretation of what it will look like.
HP countered last month with its Matrix Orchestration Environment, describing it as a unified management interface for its HP BladeSystem Matrix that will use the Xeon 5500 chips. HP will virtualize I/O and consolidate network devices with its Virtual Connect Flex-10 Ethernet and 8-Gb Fibre Channel devices that come with the blade chassis. This will let customers consolidate their existing Ethernet infrastructures, Thome says, adding that "it's the first time customers can get a converged system without a rip-and-replace strategy."

IBM will announce its own blade architecture upgrade later this year and should be able to provide a converged I/O blade without requiring customers to use nonstandard networking devices. It will do no good to multiply the number of CPU cycles if virtual machines sit idle as the hypervisor laboriously processes Ethernet packets. The goal of any high-powered blade platform is "to build a balanced system," says Rob Sauerwalt, strategic director of marketing at IBM.

For its part, Cisco has worked with VMware to produce VN-Link, a proprietary virtual network link protocol built into firmware in Cisco's UCS 6100 Series Fabric Interconnect or into switching hardware outside the blade. The 6100 Series has the management intelligence to work with VMware's vNetwork Distributed Switch, so a cluster of hypervisors can feed undifferentiated VM network traffic through the distributed switch to the converged network adapters on Cisco's blades. The adapters feed the traffic to the Cisco Fabric Interconnect, where another pre-standard protocol, Cisco's implementation of 10-Gbps Fibre Channel over Ethernet, routes it to storage or data network devices.

Essentially, Cisco has virtualized I/O outside the hypervisor. It has created high-speed Fibre Channel and Ethernet channels that can be shuffled around to meet the needs of VMs with heavy I/O traffic rather than assigning each VM a fixed resource. Cisco's servers should be able to handle higher volumes of network traffic, storage traffic, and VM-to-VM communications with less impact on core performance. Ultimately, that can help put more VMs on a blade.
