Sizing Up The Unified Computing Stack Wars

Server vendors are now armed with formidable network and storage offerings, and software vendors have acquired hardware companies. But should you tie your data center's future to an integrated vision?

Kurt Marko, Contributing Editor

April 15, 2011

By 2012, 90% of organizations will have virtual machines in production, at least among the respondents to our latest InformationWeek Analytics Data Center Convergence Survey. Next we can expect boundaries between server, network, and storage to dissolve, and that means the fight is on for vendor hegemony within next-gen converged data centers.

What started as a skirmish when Cisco entered the server market against the likes of Hewlett-Packard and IBM has escalated into a wider conflict spanning product markets and featuring competing architectural visions. Disputes rage over whether the server, the network, or the hypervisor management console should sit at the center of control, and over the role of blades vs. standalone servers. While the big systems vendors are pushing fully integrated, single-source offerings, Cisco and EMC are countering with their amalgamated Vblock configurations, which use a very different network architecture. Meanwhile, Dell is partnering with network startup Xsigo, preaching openness and standards.

You'd think conflict and competition of this magnitude would be good for IT's bottom line, but that's far from a sure thing. A case in point is the new crop of unified computing stacks, such as Cisco UCS and HP Matrix. They're pitched as instantly delivering a fully virtualized application platform, but they could end up enhancing vendors' bottom lines more than your data center operations. Think of them as mainframes for the virtualization era: systems designed to bootstrap the design and deployment of fully virtualized, cloud-like application environments. Mainframes were great for uptime, but inexpensive and flexible? Not so much.

It's something of a Faustian bargain: a tightly integrated network, compute, storage, and management platform with cloud-like agility in exchange for your commitment to a single vendor or OPEC-like consortium. Sure, you get great insight into your operations, but that knowledge comes at the cost of proprietary blade chassis, nonstandard network interfaces, and vendor-specific management software.

Granted, piecing together a data center environment using best-of-breed hardware from multiple suppliers is a daunting undertaking best attempted by large enterprise IT teams. But don't believe you need to swallow a single-sourced unified stack whole. Even resource-constrained CIOs can build a virtualized data center by starting with a subset of a unified stack and inserting third-party hardware and software where it makes sense.

The first thing to remember is that these stacks are a reaction to pervasive virtualization. It's not enough to merely run VMs anymore, since the CPU is only one piece of the application's resource ecosystem. Each virtualized app has unique network and storage requirements, and delivering fully on the promise of virtual servers means these too should come from virtualized resources.

Ideally, each application's server, network, and storage configuration will be set in a single software profile and instantiated from a shared, software-controllable resource pool. Some might call this a private cloud (though clouds also connote end-user self-provisioning, which is still a bridge too far for many organizations). But whatever the appellation, implementing this vision requires all of these physical assets to work well together under a shared management software stack. That's your baseline.
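To make the profile idea concrete, here's a minimal sketch in Python. Every class, field, and value below is hypothetical, invented for illustration; real stacks express the same concept through their own constructs, such as Cisco UCS service profiles or HP Matrix templates.

```python
from dataclasses import dataclass

@dataclass
class AppProfile:
    """One software profile capturing an app's compute, network, and storage needs."""
    name: str
    vcpus: int
    ram_gb: int
    vlan: int          # network identity travels with the profile
    storage_gb: int
    storage_tier: str  # e.g., "ssd" or "sata"

class ResourcePool:
    """A toy shared pool that hands capacity to profiles on demand."""

    def __init__(self, vcpus: int, ram_gb: int, storage_gb: int):
        self.vcpus, self.ram_gb, self.storage_gb = vcpus, ram_gb, storage_gb

    def instantiate(self, p: AppProfile) -> dict:
        # Reserve capacity; a real controller would also program the switch
        # fabric (VLANs, QoS) and carve a LUN on the storage array.
        if (p.vcpus > self.vcpus or p.ram_gb > self.ram_gb
                or p.storage_gb > self.storage_gb):
            raise RuntimeError(f"pool exhausted for {p.name}")
        self.vcpus -= p.vcpus
        self.ram_gb -= p.ram_gb
        self.storage_gb -= p.storage_gb
        return {"vm": p.name, "vlan": p.vlan, "lun_gb": p.storage_gb}

pool = ResourcePool(vcpus=128, ram_gb=512, storage_gb=10_000)
web = AppProfile("web01", vcpus=4, ram_gb=16, vlan=110,
                 storage_gb=200, storage_tier="ssd")
print(pool.instantiate(web))  # {'vm': 'web01', 'vlan': 110, 'lun_gb': 200}
```

The point is that the profile, not the physical box, becomes the unit of management: move or redeploy the app, and its network and storage identity go with it.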

Second, no matter what your sales reps say, vendors realize that integrated stack products can't be one-size-fits-all. HP, for example, offers a prebuilt "Starter Kit" that includes the essential hardware and software elements for a basic Matrix environment. Configurations then scale to petabyte-size storage arrays and terabit-per-second switch backplanes.

You will have to accept some constraints. Disputes between blade believers and rack-mount devotees are moot, since, like it or not, all of the integrated offerings are based on blades. And while the unified stack products and reference architectures address storage, it's still something of a stepchild: the main effort goes into connecting to external arrays and folding storage administration into a comprehensive management console. The more significant and contentious technology arguments center on networking, where the battle line runs between converged multiprotocol fabrics and more conventional, virtualizable switch modules supporting both Ethernet and Fibre Channel. Your choice here will depend mostly on existing investments in FC gear and expertise. Every vendor wants to be your one-stop shop for data center hardware, so buyers stand to benefit from the competition. Don't be shy about negotiating price and customization.
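For a sense of what the converged-fabric option involves at the wire level, consider FCoE, which lets one 10 Gigabit Ethernet link carry both LAN and SAN traffic. Below is a minimal NX-OS-style sketch of the basic plumbing on a converged switch; treat it as an illustrative outline rather than a validated configuration, and note that the VLAN, VSAN, and interface numbers are invented.

```
feature fcoe                         ! enable FCoE on the switch
vlan 100
  fcoe vsan 100                      ! map Ethernet VLAN 100 to FC VSAN 100
interface Ethernet1/4
  switchport mode trunk
  switchport trunk allowed vlan 100  ! FCoE VLAN rides the 10GbE trunk
interface vfc4
  bind interface Ethernet1/4         ! virtual FC interface bound to the port
  no shutdown
vsan database
  vsan 100 interface vfc4            ! place the virtual FC interface in the VSAN
```

The conventional alternative skips all of this: separate Ethernet and Fibre Channel switch modules, each managed with the tools and expertise you already have.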

In our "InformationWeek Analytics Live session at Interop Las Vegas on Thursday, May 12, called "Unified Computing Stack Wars," we'll untangle the hash of network topologies, storage architectures, and procurement models to help attendees come to grips with unified computing. Kurt Marko is an InformationWeek Analytics and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a career that has spanned the high-tech food chain, from chips to systems. He spent 15 years as an IT engineer and was a lead architect for several enterprise-wide infrastructure projects, including the Windows domain infrastructure, remote access service, Exchange email infrastructure, and managed Web services.

About the Author(s)

Kurt Marko

Contributing Editor

Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a varied career that has spanned virtually the entire high-tech food chain from chips to systems. Upon graduating from Stanford University with a BS and MS in Electrical Engineering, Kurt spent several years as a semiconductor device physicist, doing process design, modeling and testing. He then joined AT&T Bell Laboratories as a memory chip designer and CAD and simulation developer. Moving to Hewlett-Packard, Kurt started in the laser printer R&D lab doing electrophotography development, for which he earned a patent, but his love of computers eventually led him to join HP's nascent technical IT group. He spent 15 years as an IT engineer and was a lead architect for several enterprisewide infrastructure projects at HP, including the Windows domain infrastructure, remote access service, Exchange e-mail infrastructure and managed Web services.
