Cisco Updates Data Center 3.0: New Switches And Architecture

Cisco has announced new products for its Data Center 3.0 program. The new products and features continue to enhance the Cisco UCS platform with new servers and an improved interconnect technology, FEX-Link, that offers increased capacity and redundancy.

Mike Fratto, Former Network Computing Editor

April 12, 2010

4 Min Read

Starting from scratch, as Cisco did with the UCS, the company has a somewhat easier time than competitors Dell, HP and IBM in building up its product portfolio, since it has no legacy products to support -- yet. But it does mean buying into the whole Cisco vision -- servers, storage and networking -- to really make a go of it. Dell, HP and IBM want you to do the same, but with their array of partners, including (in some cases) Cisco, you have more choice and the potential for a more customized and targeted overall solution.

Cisco is pushing the concept of any port, any server, anywhere in the data center -- in effect, abstracting the physical hardware away from server location. Cisco hasn't gone as far as the now-defunct Liquid Computing (reviewed in the November 2009 Digital Issue, registration required), which used Non-Uniform Memory Access (NUMA) technology to distribute computing across any set of servers, sharing CPU, memory and I/O as needed. But with this announcement, Cisco is making the networking more seamless and flexible.

FEX-Link is the latest iteration of connectivity between Cisco's UCS blades and the network, and it really describes an architecture rather than a product. FEX-Link is a fabric extension that uses low-cost physical hardware to interconnect a server to a Nexus 5000, which provides framing, forwarding, routing and other network services. For servers connected to the same Nexus 5000, network traffic travels from the blade through the chassis's Nexus 2000 FEX to a UCS 6100, then to a Nexus 5000, and back again. As one representative put it, "FEX is a line card having an out-of-chassis experience."
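To make that traffic path concrete, here is a minimal sketch in Python of the hops described above for two blades hanging off the same Nexus 5000. The hop labels are purely illustrative, not Cisco terminology:

```python
# Illustrative model of the FEX-Link path described above: the fabric
# extenders and fabric interconnects carry the frames, but switching
# decisions happen only at the Nexus 5000. Hop labels are my own.
INTRA_DOMAIN_PATH = [
    "source UCS blade (mezzanine adapter)",
    "Nexus 2000 fabric extender (no local switching)",
    "UCS 6100 fabric interconnect",
    "Nexus 5000 (framing, forwarding, routing, network services)",
    "UCS 6100 fabric interconnect",
    "Nexus 2000 fabric extender",
    "destination UCS blade",
]

for hop_number, hop in enumerate(INTRA_DOMAIN_PATH, start=1):
    print(f"{hop_number}. {hop}")
```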

Compared to other chassis-to-network architectures that put switch functionality in the blade chassis itself -- allowing direct interswitch connectivity and leaving only uplinks for traffic destined for servers or clients elsewhere -- the FEX-Link architecture seems overly rigid. But since the Nexus 5000 is doing the switching, it can handle up to 384 10Gb or 576 100/1000 Ethernet servers in a single switched domain. To make this happen, Cisco has to ensure there is adequate capacity between the servers and the Nexus so that there are no bottlenecks.
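As a rough sanity check, those single-domain numbers line up with the port counts of the new fabric extenders described below if you assume a dozen fabric extenders per Nexus 5000 domain. That FEX-per-switch count is my assumption, not a figure from the announcement:

```python
# Back-of-the-envelope check of the switched-domain scale quoted above.
# ASSUMPTION: 12 fabric extenders per Nexus 5000 domain (not stated in
# the article); the per-FEX port counts come from the announcement.
FEX_PER_DOMAIN = 12            # assumed
NEXUS_2232_10GB_PORTS = 32     # 10Gb FCoE access ports per Nexus 2232
NEXUS_2248_GBE_PORTS = 48      # 100/1000 access ports per Nexus 2248

print(FEX_PER_DOMAIN * NEXUS_2232_10GB_PORTS)  # 384 10Gb server ports
print(FEX_PER_DOMAIN * NEXUS_2248_GBE_PORTS)   # 576 100/1000 server ports
```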

The capacity between the UCS blades and the UCS 6100 Series Fabric Interconnect has been increased to 160 Gbps -- 80 Gbps in each direction -- a fourfold increase. The interconnects are also active/active, meaning a Nexus 2000 can be connected to different UCS 6100s. The additional capacity doesn't require a new UCS 5100 chassis, but it will require new mezzanine interface cards that connect the blade servers to the UCS backplane. Also new are two fabric extenders: the Nexus 2232, which has 32 10Gb FCoE ports and eight FCoE uplink ports in a 1 RU top-of-rack form factor, for a 4:1 over-subscription of access ports to uplink ports, and the Nexus 2248, a 48-port fabric extender supporting 100/1000BASE-T copper switch ports for legacy equipment and four 10Gb uplinks. Cisco is also trying to push the price of high-speed 10Gb connectivity down closer to the $300-per-port mark. The FEX Transceiver, for example, is a specialized SFP+ interface that reaches up to 100 meters at what Cisco claims is a price comparable to twinax cabling. Other hardware news includes the new MDS 9148, a 48-port 8Gbps Fibre Channel SAN switch whose ports can be added as needed in 16-port increments.
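The over-subscription figure for the Nexus 2232 is easy to verify from the port counts quoted above -- 32 10Gb access ports against eight 10Gb uplinks works out to 4:1:

```python
# Verify the 4:1 over-subscription claim for the Nexus 2232:
# downstream access capacity divided by uplink capacity.
access_gbps = 32 * 10   # 32 10Gb FCoE access ports
uplink_gbps = 8 * 10    # 8 10Gb FCoE uplink ports
print(f"{access_gbps / uplink_gbps:.0f}:1")  # -> 4:1
```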

Buying into any single vendor's networking and server solution can be limiting, but it also provides a measure of comfort that the interconnected products should work well together, minimizing management and integration tasks. Mixing and matching hardware to suit your needs can deliver a better overall architecture, but at the expense of doing more leg work to get all the pieces to play well together. Basic Ethernet functions -- physical interconnection, VLAN assignment, QoS enforcement and the like -- are pretty well defined and understood. But as data centers become more automated and malleable -- which seems to be the case, driven by server virtualization and more flexible computing demands -- mixing and matching hardware, especially in the early days before standards work and interoperability testing are done, can be more difficult or even a non-starter. Given that hardware refreshes happen on a three-to-five-year cycle, and that it takes even longer to rip and replace a network, making the right decision is critical.

You definitely want to investigate next-generation networking offerings from Brocade, Cisco, Extreme, Force10, HP and Juniper and find out what those vendors' plans are for supporting server and storage virtualization, storage connectivity and orchestration. You may not be deploying a fully virtualized data center today, but laying the groundwork now will save you much work in the long run when you do.


About the Author(s)

Mike Fratto

Former Network Computing Editor

Mike Fratto is a principal analyst at Current Analysis, covering the Enterprise Networking and Data Center Technology markets. Prior to that, Mike was with UBM Tech for 15 years and served as editor of Network Computing. He was also lead analyst for InformationWeek Analytics and executive editor for Secure Enterprise. He has spoken at several conferences, including Interop, MISTI and the Internet Security Conference, as well as to local groups. He served as chair of Interop's data center and storage tracks. He also teaches a network security graduate course at Syracuse University. Before Network Computing, Mike was an independent consultant.

