More modular combinations of virtualized compute, networking, and storage, such as VMware's EVO:Rail, are on the way.

Bhargav Shukla, Director, Product Research & Innovation, KEMP Technologies

October 30, 2014

5 Min Read

Editor's note: Shukla's firm, Kemp Technologies, is working with both Microsoft and VMware to integrate its LoadMaster load balancer products with the Microsoft HNV and VMware NSX network virtualization offerings; those integrations are still in development.

Virtualization promised to change the landscape of computing, and it certainly has.

Virtualization revolutionized how we look at compute resources by abstracting software from its underlying hardware. Pooling compute resources improved efficiency, and deployment and management of physical resources became much more streamlined. No more dealing with downtime and unhappy users when replacing old hardware.

Along with these benefits, however, virtualization created a new challenge: virtual machine proliferation. Grappling with the side effects of such proliferation means optimizing VM lifecycle management, including provisioning, maintenance, management, and de-provisioning.

As with any new IT advancement, the solution to one problem created opportunities for further advancement. The different entities -- servers, networking hardware, storage, and so on -- remained independent of each other and posed challenges such as the time needed to deploy a new unit of computing within a virtualized infrastructure. Some custom-built "virtualization-in-a-box" solutions, such as VCE's Vblocks, reduced "unbox to run" time from days to hours or less. In general, however, the issues remained unresolved -- until the recent push toward hyper-convergence.

[Want to learn more about VMware's foray into hyper-convergence? See VMware Debuts EVO:Rail.]

During VMworld 2014 in August, VMware introduced an example of hyper-convergence: EVO:Rail, a combination of virtualization software loaded onto four server nodes, sliding on rails into a 2U space of a server rack. It represents compute, storage, and networking in a single modular unit. Each unit has its own switch, and units can be built up into a larger cluster. Other forms of hyper-convergence are sure to follow.

So what does hyper-convergence really mean? What problem does it promise to address?

The idea behind hyper-convergence is to abstract other layers of infrastructure for simplicity and elasticity. Software-defined networking (SDN) brings the benefits of standardization to network virtualization and reduces costs in hardware and management layers. SDN with open networking standards also promises cross-platform compatibility. Similarly, virtual SAN promises to abstract storage infrastructure from management dependencies.

Virtualization software such as VMware's vSphere pooled physical servers, networking equipment, and storage arrays to work in tandem. However, each element still falls into a different management context and is usually managed by separate teams.

By bringing compute, hypervisor I/O control, and storage entities together, hyper-convergence lets administrators manage pools of compute resources, disk assignments for hosts and virtual machines, and network underlay configuration from a single unified management interface.

The benefits of hyper-converged infrastructure are faster deployments, the ability to quickly expand capacity, automated deployment of most infrastructure components, and reduced external dependencies. (You still need to plug in the power, though.)

While converged systems typically consist of separate components that are designed to work well together, hyper-converged systems are truly modular, resulting in a scale-out approach as opposed to a scale-up one. While this approach brings benefits, there are also potential drawbacks. For example, storage consumption typically outpaces compute consumption, so if more storage is needed while compute needs are relatively static, a new module must be added to the infrastructure. This runs slightly counter to the move to an OpEx-focused model. That said, popular opinion is that the pros outweigh the cons.

VMware offerings such as EVO:Rail provide customers with certified solutions from hardware vendors, built to specification and shipped as a single unit, so deployment becomes a simple task of rolling the solution from a truck to the datacenter and plugging it in. You can go from powering up the rack to the first VM deployment in a matter of minutes. Nutanix has been a leader in this space, providing highly scalable building blocks for modern data centers.

With hyper-convergence, networking presents certain challenges. As more enterprise and service provider networks head toward private cloud, public cloud, and hybrid deployments, network infrastructures increasingly require virtualized network technologies such as VMware's NSX network virtualization and Microsoft's Hyper-V Network Virtualization.

While these technologies promise great flexibility in multi-tenant networks, they also require new thinking for old devices such as network switches, gateways, and application delivery controllers. With more isolated or bridged virtual and physical networks, the challenge becomes locating the actual resource being accessed. How does a switch or router know that a requested resource is outside the physical network boundary? How does it know which server hosts the destination virtual machine? How does it know which of the many virtual networks the virtual machine belongs to?
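The resolution problem above comes down to mapping a tenant's address to a virtual network and a physical host. The sketch below is purely illustrative (all class names, fields, and data are hypothetical, not any vendor's actual API); it shows why an overlay identifier such as a VXLAN-style virtual network ID is needed to disambiguate tenants whose IP ranges overlap:

```python
# Hypothetical sketch: how an overlay-aware device might resolve a tenant VM.
# All names and data here are illustrative, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class OverlayEndpoint:
    vni: int          # virtual network identifier (e.g., a VXLAN VNI)
    host_vtep: str    # physical host's tunnel endpoint (VTEP) IP
    vm_mac: str       # VM's MAC address inside the overlay

class OverlayMap:
    """Maps (VNI, VM IP) -> endpoint, as a controller-pushed table might."""
    def __init__(self):
        self._table = {}

    def learn(self, vm_ip: str, ep: OverlayEndpoint):
        self._table[(ep.vni, vm_ip)] = ep

    def resolve(self, vni: int, vm_ip: str):
        # Without the VNI, the same IP may exist in many tenant networks --
        # which is exactly why an overlay-unaware device cannot resolve it.
        return self._table.get((vni, vm_ip))

overlay = OverlayMap()
overlay.learn("10.0.0.5", OverlayEndpoint(5001, "192.168.1.10", "aa:bb:cc:00:00:01"))
overlay.learn("10.0.0.5", OverlayEndpoint(5002, "192.168.1.11", "aa:bb:cc:00:00:02"))

# Same tenant IP, two different tenants: the VNI disambiguates.
print(overlay.resolve(5001, "10.0.0.5").host_vtep)  # 192.168.1.10
print(overlay.resolve(5002, "10.0.0.5").host_vtep)  # 192.168.1.11
```

A physical switch without this table can answer none of the three questions posed above; a controller that pushes such mappings to overlay-aware devices can answer all of them.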

In the case of application delivery controllers and load balancers, the technologies must go a step further to become more valuable. If the application delivery controller device is not aware of the underlying virtualized networks, it can work only within the boundary of a given virtual network, blind to any other virtual and physical networks that may exist under the same management context. It can offer application delivery services such as load balancing, health awareness, content switching, caching, and compression only to clients and servers that exist within that virtual network boundary. What if hosted infrastructure is part of an isolated tenant? What if the tenant network is only accessible via a virtual network-aware gateway? Will the service provider be required to deploy at least one ADC per tenant network? If so, how can one benefit from the promised efficiencies? As usual, the benefits of any hyper-converged infrastructure are limited to the capabilities of its weakest link.
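To make the ADC-per-tenant question concrete, here is a minimal sketch, with hypothetical names throughout, of the alternative: a single overlay-aware ADC that keeps a separate backend pool per virtual network, rather than one appliance deployed inside each tenant network:

```python
# Illustrative sketch (hypothetical names): one overlay-aware ADC holding
# per-tenant backend pools, instead of one ADC per tenant network.

from itertools import cycle

class TenantAwareADC:
    def __init__(self):
        self._pools = {}  # virtual network ID -> round-robin backend iterator

    def add_pool(self, vni: int, backends):
        self._pools[vni] = cycle(backends)

    def pick_backend(self, vni: int) -> str:
        # An overlay-unaware ADC has no notion of a VNI and could only
        # ever serve the one virtual network it was deployed inside.
        if vni not in self._pools:
            raise LookupError(f"no pool for virtual network {vni}")
        return next(self._pools[vni])

adc = TenantAwareADC()
adc.add_pool(5001, ["10.0.0.5", "10.0.0.6"])
adc.add_pool(5002, ["10.0.0.5"])  # overlapping tenant IP space is fine

print(adc.pick_backend(5001))  # 10.0.0.5
print(adc.pick_backend(5001))  # 10.0.0.6
print(adc.pick_backend(5002))  # 10.0.0.5
```

The design point is that tenant isolation moves from "one box per tenant" into a lookup key, which is where the promised efficiency of a shared, overlay-aware ADC would come from.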

Ultimately, this means that vendors of networking devices such as network switches, routers, and application delivery controllers cannot afford to ignore the developing trend. Unless they offer deeper integration with hyper-converged infrastructure and awareness of network overlay technologies such as NSX and HNV, even strong network solutions risk becoming obsolete and being replaced.

I see a new cycle starting. One solution to a problem creates many new opportunities. Savvy IT managers will look for data center components that take the new hyper-converged infrastructure into account.


About the Author(s)

Bhargav Shukla

Director, Product Research & Innovation, KEMP Technologies

Bhargav Shukla is director of technology and strategic alliances at KEMP Technologies. He is also one of very few people in the world to hold the prestigious Microsoft Certified Master certification for Microsoft Exchange 2010.

