After a couple of days in Las Vegas hearing all about 'cloud' this and 'fabric' that, a translation is in order.

Kurt Marko, Contributing Editor

May 11, 2011

3 Min Read

Cloud and fabric are the two main themes at Interop 2011, a UBM TechWeb event, and like every IT term that attains buzzword status, these two are overused and ill-defined. So after a couple of days here in Las Vegas, allow me to translate.

According to the generous definitions slung by everyone from keynote speakers to show floor hawkers, "cloud" means any software or virtual machine not running on your local network. Some of these offerings might once have been known as websites, remote displays, or timeshare applications, but cloud suits the zeitgeist, so that's how people are reframing their products.

Thus, streaming an application display from a remote server to a tablet is now an example of the cloud at work. Running enterprise applications on internal, virtualized servers is now a private cloud, and mixing on-premises applications with an online, public/shared service is a hybrid cloud.

While all of these examples share elements of a rigorous cloud definition (a shared, virtualized application or server, sized according to need, priced according to usage, and provisioned by end users), they differ in so many ways that lumping them all under the "cloud" label obscures more than it clarifies.

So when Interop attendees hear the word, they must resist the temptation to equate the touted product with their idealization of a cloud service. That caveat aside, the popularity of utility-style, cloud-based applications is evident on the show floor.

Whether it's the ability of virtualized security appliances like Vyatta's to be provisioned using the CloudStack framework or the plethora of WAN optimization products like those from Blue Coat and Riverbed, the cloud, however loosely defined, is changing the product landscape.
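
Strip away the branding, and what makes these offerings cloud rather than merely hosted is that an end user can provision them through an API call instead of a ticket. As a rough illustration, here is a minimal Python sketch of deploying a virtual appliance through a CloudStack-style API; the endpoint, credentials, and resource IDs are placeholders, and the request signing follows CloudStack's convention of an HMAC-SHA1 over the sorted, lower-cased query string.

```python
# Illustrative sketch: self-service provisioning of a virtual appliance via a
# CloudStack-style API. The endpoint, keys, and resource IDs are placeholders.
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

API_URL = "http://cloud.example.com:8080/client/api"   # placeholder endpoint
API_KEY = "your-api-key"                                # placeholder credentials
SECRET_KEY = "your-secret-key"

def signed_request(command: str, **params: str) -> bytes:
    """Build, sign, and issue a CloudStack-style API request."""
    params.update({"command": command, "apikey": API_KEY, "response": "json"})
    # CloudStack signs the alphabetically sorted, lower-cased query string
    # with HMAC-SHA1, then base64- and URL-encodes the result.
    query = urllib.parse.urlencode(sorted(params.items()))
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    with urllib.request.urlopen(f"{API_URL}?{query}&signature={signature}") as resp:
        return resp.read()

# Ask for a new virtual appliance instance. The service offering, template,
# and zone IDs are whatever the provider has already defined.
print(signed_request("deployVirtualMachine",
                     serviceofferingid="1", templateid="2", zoneid="1"))
```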

"Fabric," meantime, is the nom de guerre for any network equipment that provides:
(a) a high density of 10-Gigabit or greater ports,
(b) sub-five-microsecond latency between ports,
(c) scalable aggregation requiring no more than two tiers between server and core (see the sizing sketch after this list),
(d) data and storage network convergence, i.e., FCoE, and
(e) the ability to manage and apply network policy to virtual ports as if they were physical.
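
To put criterion (c) in concrete terms, here is a back-of-the-envelope sizing sketch in Python. Every port count and link speed in it is an illustrative assumption, not a description of any vendor's product: in a two-tier leaf-and-spine design, the spine tier's aggregate port count caps how many leaf switches can attach, and the leaf downlink count then caps how many servers the fabric reaches.

```python
# Back-of-the-envelope sizing for a two-tier (leaf/spine) fabric.
# All port counts and link speeds below are illustrative assumptions.

def two_tier_capacity(spines: int, spine_ports: int,
                      leaf_uplinks: int, uplink_gbps: int,
                      leaf_downlinks: int, downlink_gbps: int):
    """Return (leaf switches, server ports, leaf oversubscription ratio)."""
    # Each leaf consumes `leaf_uplinks` spine-facing ports in total,
    # so the spine tier's aggregate port count caps the number of leaves.
    max_leaves = (spines * spine_ports) // leaf_uplinks
    server_ports = max_leaves * leaf_downlinks          # one server per downlink
    oversub = (leaf_downlinks * downlink_gbps) / (leaf_uplinks * uplink_gbps)
    return max_leaves, server_ports, oversub

# Example: four 128-port spines; leaves with 48 x 10-Gig server ports
# and 4 x 40-Gig uplinks.
leaves, servers, oversub = two_tier_capacity(
    spines=4, spine_ports=128,
    leaf_uplinks=4, uplink_gbps=40,
    leaf_downlinks=48, downlink_gbps=10,
)
print(f"{leaves} leaves, {servers} server ports, {oversub:.0f}:1 oversubscription")
```

With the assumed numbers, that works out to 128 leaves, 6,144 server ports, and 3:1 oversubscription at the leaf; swapping in real figures shows quickly whether a two-tier design holds up at a given scale.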

Within these broad criteria, almost every major vendor at Interop is touting a fabric architecture. Alcatel-Lucent leads with its Best of Interop category-winning data center switching architecture. Cisco has FabricPath (though it has chosen to emphasize its wireless LAN technologies this week). Extreme Networks is showcasing its BlackDiamond X8, a 192-port, 40-Gigabit core switch. And Juniper is putting hardware behind its much-hyped QFabric architecture.

While the implementations differ, in ways vendors can't resist highlighting to disparage their competitors, they are all taking different paths to the same destination: a high-speed, low-latency, converged network designed for virtualized applications. And although vendors love to tout raw specifications, such as the number of 10-Gig ports per chassis or the maximum number of ports in a low-latency switch mesh, unless you're running a Google-scale data center those differences matter less than how each vendor's design and management software handle virtual network ports and traffic.
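
To make that last point concrete, here is a conceptual sketch, a hypothetical model rather than any vendor's actual API, of the idea behind criterion (e): policy is bound to a port profile on the VM's virtual interface, so it follows the workload when it migrates instead of being tied to a physical switch port.

```python
# Conceptual sketch (hypothetical model, not a vendor API): a port profile binds
# network policy to a VM's virtual NIC so the policy follows the VM across hosts.
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    name: str
    vlan: int
    qos_class: str
    acls: list = field(default_factory=list)

@dataclass
class VirtualPort:
    vm_name: str
    profile: PortProfile
    host: str                      # physical host currently carrying the VM

    def migrate(self, new_host: str) -> None:
        # The profile travels with the virtual port; neither the source nor the
        # destination host's physical switch port needs reconfiguring.
        self.host = new_host

web = PortProfile(name="web-tier", vlan=100, qos_class="gold",
                  acls=["permit tcp any any eq 443"])
vport = VirtualPort(vm_name="web01", profile=web, host="host-a")
vport.migrate("host-b")
print(vport.host, vport.profile.name, vport.profile.vlan)
```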

But don't take my word for it. Check out the highlights on Interop TV and see the latest products for yourself.

Kurt Marko is a contributor for InformationWeek.


About the Author(s)

Kurt Marko

Contributing Editor

Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a varied career that has spanned virtually the entire high-tech food chain from chips to systems. Upon graduating from Stanford University with a BS and MS in Electrical Engineering, Kurt spent several years as a semiconductor device physicist, doing process design, modeling and testing. He then joined AT&T Bell Laboratories as a memory chip designer and CAD and simulation developer.

Moving to Hewlett-Packard, Kurt started in the laser printer R&D lab doing electrophotography development, for which he earned a patent, but his love of computers eventually led him to join HP’s nascent technical IT group. He spent 15 years as an IT engineer and was a lead architect for several enterprisewide infrastructure projects at HP, including the Windows domain infrastructure, remote access service, Exchange e-mail infrastructure and managed Web services.
