State Of Data Centers: Hot, Crowded, Virtual

With just 8% of respondents to our 2012 survey expecting to build new facilities, and constrained budgets rated the most impactful trend, it's clear enterprise IT's transformation into a service provider is in full swing.

Kurt Marko, Contributing Editor

May 30, 2012

Assessing the state of your data center by looking at the building is like judging a car by its paint--you can throw a classic flame job on a 2001 Honda Civic, but you'll still have just four cylinders under the hood. Instead, focus on the performance the business needs now and what it will require in the near future--remembering that gigabit-speed WANs and cloud services are now the norm and new colocation data centers let you rent the equivalent of a Shelby Mustang.

Every IT team needs to be skilled in weighing in-house data centers vs. various outsourcing options, including the cloud, and balancing the economies of standardized x86 servers against the performance of optimized systems. That's why this year we rebuilt our annual InformationWeek State of the Data Center Survey from the tires up to focus on application infrastructure rather than facility design.

Not that our 256 respondents, all of them involved with management or decision-making at companies with data centers of 1,000 square feet or larger, aren't still worried about their physical plants. Hardware packing more performance punch into every cubic inch has ratcheted up power and cooling demands to the point where simply renovating old facilities is often not an option. And big construction projects are harder than ever to justify, especially as the options for cloud infrastructure and colocating your gear in a third party's data center grow.

There's also a major operational shift afoot as the "customers" of enterprise data centers flex their muscles. Standardization is great, but who hasn't had a business unit specify that it absolutely, positively needs a software package that only runs on Sun SPARC or IBM Power? Or maybe the development team bought some specialized hardware and wants to move it into your data center.

This puts CIOs in a tough spot as they try to improve efficiency, lower costs, and optimize performance through a tightly controlled set of standardized hardware and software, while accommodating customized infrastructure. The rise of virtualization as the standard server platform for new applications, and the consequent use of commodity x86 hardware for an increasing share of workloads, means IT provides more value by efficiently buying and deploying "good enough" x86 servers than by painstakingly selecting "just right" hardware.

Money Matters

Respondents put constrained budgets at the top of our list of 16 trends that will have the greatest impact on data centers over the coming 12 months. Strapped IT teams are torn between standardization and efficiency on one side and customization and performance on the other. A primary goal of this year's survey was to learn at what level of the data center hardware and application infrastructure IT is imposing standardization. The OS? Application platform? Are purpose-built appliances worth supporting for high-value, frequently used applications like data warehouses and business analytics?

We need to come up with a guiding strategy here. Thirty percent of respondents spend less than 20% of their total IT budgets on data center facilities, hardware (servers, storage, networks), and operations; 60% spend less than 30%. Yet just 7% say they'll consider a colocation facility, even as 15% say demand for data center resources will rise by more than 25% in the coming year; 58% expect less-dramatic increases. Only 5% expect any decrease in demand.

Although most respondents have some hardware and business application standards that could help contain costs, they're not rigid. A quarter bend over backward to meet new requests, evaluating hardware and software on a case-by-case basis and picking the best fit available.

In terms of hardware, respondents are imposing limits: 63% of companies hosting their own applications use one or two vendors and a limited set of hardware. Only 29% pick the best hardware based on the application. Despite feverish marketing, x86 servers are a commodity, with differences only at the margins. If Dell introduces a new system with Intel's latest CPU and chipset, you can bet that Cisco, Hewlett-Packard, and IBM won't be far behind, meaning there's little room for white-box or second-tier vendors.

When it comes to software, our survey finds a similar reliance on standardization. Forty-two percent have a standard platform with no or limited exceptions. Twenty-three percent of respondents' organizations--and we're guessing midsize businesses make up a big chunk of this group--adopt one or two vendors' application stacks, say Windows Server and a core set of Microsoft enterprise software like Exchange, SharePoint, and SQL Server. Another approach is to pick a couple of vendors with broad product portfolios, say Microsoft and Oracle or IBM and SAP, and let application owners find the best fit within this menu of software choices.

Interestingly, just 31% take a best-of-breed approach, individually evaluating every request to find the best software, and few (2%) develop most of their own applications. One option that got zero takers: "We seldom buy new software; we encourage application owners to find the right SaaS product for their needs." Clearly, those with sunk data center investments aren't ready to turn users loose in the public cloud.

Research: 2012 State of the Data Center

Hot, Crowded and Standardized
Our full report on the state of the data center is free with registration.

This report includes 33 pages of action-oriented analysis, packed with 26 charts. What you'll find:

  • 16 top trends affecting data centers, rated by criticality

  • Discussion of where "data centers in a box" make sense

Small Victories

So why all the interest in standardization? A primary goal is to eliminate repetitive work and overhead to free up resources for innovation, and in that sense, respondents are succeeding. Yes, 67% of the IT budget is still spent on maintenance and operations vs. 33% on innovation--little changed from last year--but that split still counts as a victory for efficiency: 67/33 is the new 80/20.

We attribute this triumph to server virtualization, as 50% of respondents report that half or more of their production servers will be virtualized by the end of 2012; for an aggressive 9%, it's more than 90%. This is actually a lower figure than reported in our Virtualization Management Survey last summer, which found 63% of respondents expected to have half or more of their production servers virtualized by the end of 2011, a whopping 11-point increase from the prior year.

One engineer with an energy services company that's approaching 70% virtualization has been at it since 2004. More than 90% of its mission-critical Tier 1 applications rely on VMware virtualization and mirrored SAN technologies between two data centers, with service-level agreements of less than one hour in most cases. "We have had a 'virtualize first' policy over the last four years," he says.

But maybe "virtualization" is too imprecise a term, since our survey shows that in the enterprise, virtualization means VMware. Fully 53% of our respondents name VMware's management platforms, vCenter and vCloud Director, as their standard or preferred software stacks, leaving Microsoft's System Center in the dust with only 10%. Every other platform, whether from established virtualization companies like Citrix or new private cloud upstarts like the OpenStack spin-offs, stayed stuck in single digits.

There's still a need to up our game, however. While the extent of server virtualization is rising, the level of sophistication remains rather low. Specifically, virtual machines are often a one-for-one replacement for dedicated, standalone servers, rather than part of a private cloud self-service utility. Although two-thirds of respondents have private cloud plans, only 30% have made substantive progress building them. Just 8% are avoiding private clouds, with 25% in that wishy-washy "investigating" phase.

In sum, despite the vendor hype and IT lip service paid to private clouds, such tepid adoption suggests enterprise buyers are unsure what a private cloud really buys them.

We're not worried about private and hybrid cloud adoption--this is one technology where the more IT teams learn, the more they see the benefits. For example, 40% cite flexibility to meet new business needs as a top infrastructure requirement, a goal that should ultimately translate into greater interest in private cloud software stacks.

Yet the key is building a hybrid architecture with the right fuel mix of public and private cloud resources. Few companies are ready to ship everything off-site--only 4% say they run applications on an infrastructure-as-a-service platform whenever possible--but the public cloud has gained respectability in most IT shops. Fifteen percent put it among the top three factors that will most change data centers in the next year.

chart: What's your data center strategy?

Broken-Down Data Centers

One head-scratcher: respondents' reluctance to move privately owned and operated hardware to colocation facilities, many of which have state-of-the-art security, connectivity, and management. That's puzzling given the hot, crowded, power-starved state of many legacy raised-floor data centers--a situation that isn't likely to improve soon.

Why? Consider that hardware performance is once more the limiting factor on server workloads, so we're seeing more interest from IT operations in bigger iron. The problem is that, for the most part, chip merchants and server vendors have been using Moore's Law to make servers faster--not more energy efficient. Today's CPUs still have about the same thermal design power, usually in the 100- to 130-watt range, as those from two or three generations ago. Yet server designers have been quite effective at cramming more CPUs and memory sockets into smaller spaces. Translation: A high-end 1U server can easily burn 600 to 700 watts, while a loaded 6U to 10U blade chassis could use 4 to 5 kilowatts. It's easy for a single equipment rack to hit 20 to 25 kilowatts. This poses a problem for many legacy raised-floor data centers, where a typical design guideline called for about 100 to 200 watts per square foot.

And yet, almost two-thirds of respondents say they'll aggressively virtualize new servers within their existing data centers. (And did we mention that 62% of respondents make do with less than 5,000 square feet?) Even assuming your data center can deal with the power load--and that's a big assumption given that it takes only about 60 such racks to hit a megawatt, or enough to power 1,000 homes--can you cool it?
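
To see why this worries facility managers, run the back-of-the-envelope numbers. The sketch below (in Python) uses the wattage ranges cited above; the rack composition and floor-space figures are illustrative assumptions, not survey data.

    # Rough rack power math using the wattage ranges cited in this article.
    # Server count per rack and floor area are illustrative assumptions.
    SERVER_WATTS = 650           # midpoint of the 600-700 W cited for a loaded 1U box
    SERVERS_PER_RACK = 35        # hypothetical: a 42U rack, less switches and PDUs
    LEGACY_WATTS_PER_SQFT = 150  # midpoint of the 100-200 W/sq ft legacy guideline
    SQFT_PER_RACK = 25           # assumed floor area per rack, including aisle space

    rack_kw = SERVER_WATTS * SERVERS_PER_RACK / 1000.0
    legacy_kw = LEGACY_WATTS_PER_SQFT * SQFT_PER_RACK / 1000.0
    racks_per_mw = 1000.0 / rack_kw

    print(f"Rack load: {rack_kw:.1f} kW")              # ~22.8 kW, squarely in the 20-25 kW range
    print(f"Legacy floor supports: {legacy_kw:.1f} kW per rack footprint")  # ~3.8 kW
    print(f"Racks to reach 1 MW: {racks_per_mw:.0f}")  # roughly 40-60, depending on rack load

The gap between what a legacy raised floor was designed to deliver and what a fully loaded rack now draws is the crux of the renovation problem.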

Aside from power and cooling, another problem with aging data centers in the era of hybrid clouds is providing adequate WAN bandwidth to meet the needs of more servers with data-hungry applications synchronizing to external systems. Granted, most traffic still runs within the edge network fabric, but plenty still hits the core en route to the WAN. Even assuming huge oversubscription ratios, a few racks of 10-Gbps servers can saturate a 1-Gbps pipe.
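
A quick sanity check on that claim, with a hypothetical server count and an assumed oversubscription ratio (a Python sketch, not survey data):

    # Even heavy oversubscription leaves a few racks of 10-Gbps servers
    # demanding more WAN capacity than a 1-Gbps circuit provides.
    SERVER_NIC_GBPS = 10
    SERVERS = 3 * 35          # three hypothetical racks of 35 servers each
    OVERSUBSCRIPTION = 100    # assume only 1/100th of edge capacity ever reaches the WAN

    wan_demand_gbps = SERVERS * SERVER_NIC_GBPS / OVERSUBSCRIPTION
    print(f"WAN-bound demand: {wan_demand_gbps:.1f} Gbps")  # 10.5 Gbps vs. a 1-Gbps pipe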

It's common for colocation facilities to have dozens or hundreds of 10-Gbps circuits to multiple carriers, but such scale and diversity aren't available everywhere, so any company planning to buy or build a new data center needs to make sure sufficient WAN capacity can be had--and if so, at what cost.

chart: How do you expect demand for data center resources to change?

Separate Facility From Contents

A better approach: Design the ideal application infrastructure--servers, storage systems, network equipment, and cloud architecture--and let that plan drive facility plans. Not convinced? Here's a quick rundown of current trends and their consequences for your data center plant.

>> Servers: Despite predictions of its imminent demise, Moore's Law is alive and well in the server realm. Intel's recent Xeon updates served notice that the trend toward more power, cores, and memory will continue. If power and cooling are already a problem, don't expect relief from server vendors.

>> Mainframes and proprietary Unix: Despite the success of virtualization, mainframes and proprietary RISC systems aren't going away anytime soon. Yes, they're a niche for most of our respondents, with two-thirds running fewer than 20% of their applications on these platforms. But a very respectable 76% say specialized hardware will remain important in their data centers--28% plan to buy more. Although these systems don't have the density of x86 servers, their power and cooling demands are nonetheless substantial.

>> Hardware appliances: Virtualized general-purpose servers are the Swiss army knives of the data center, ready to run anything but not optimized for any particular workload. However, some applications--particularly data warehouses or business analytics with a steady stream of database transactions accessing very large data sets--can benefit from hardware tuning.

Adopters of database appliances like Netezza or Oracle Exadata or gateway products from the likes of IronMail (now McAfee EWS), Mirapoint, SonicWall, or Sendmail like the ease of deployment and all-in-one hardware/software support these systems deliver, and many believe they're also getting more bang for the buck. Yet those companies most attracted to fast deployments and appliance simplicity, particularly SMBs, might be better served with a SaaS product. Pay-as-you-go services running on scalable clouds offer many of the benefits of a dedicated hardware appliance without the capital investment and long-term hardware commitment.

>> LAN: Server virtualization and private clouds, and the resultant proliferation of virtual NICs, have brought about a reversal of typical data center traffic patterns. The majority, perhaps 80%, of LAN traffic is direct server-to-server, east-west traffic, and doesn't traverse the network core. The emphasis now is on big (lots of ports), fast (10 Gbps), flat (two-tier, fat-tree topologies), and low-latency fabrics. And vendors are responding: The Gnodal 40-Gbps Ethernet aggregation switch won the Best of Interop award this year. Cisco offers its FabricPath (Nexus 7000 and 5500 switches), Juniper its QFabric, Extreme its BlackDiamond, and Mellanox its SX-series.

Fortunately, 10-Gbps switches have gotten much more efficient, currently running at 3 to 5 watts per port, so the incremental load of top-of-rack switches, at 1 to 2 kilowatts, pales in comparison with the servers in the rest of the rack. However, cabling, particularly if using small-form-factor pluggable fiber instead of the new 10GBase-T copper interconnects, can be both messy and expensive.
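
Here's a rough check on that comparison, reusing the per-port figure above; the switch count and fixed overhead are assumptions, not measurements:

    # Top-of-rack switching as a share of total rack power.
    WATTS_PER_PORT = 4            # midpoint of the 3-5 W per 10-Gbps port cited above
    PORTS_PER_SWITCH = 48
    SWITCHES_PER_RACK = 2         # hypothetical redundant ToR pair
    FIXED_WATTS_PER_SWITCH = 200  # assumed fans, CPU, and optics overhead

    tor_kw = SWITCHES_PER_RACK * (WATTS_PER_PORT * PORTS_PER_SWITCH + FIXED_WATTS_PER_SWITCH) / 1000.0
    server_kw = 22.8              # fully loaded rack of 1U servers, per the earlier sketch
    share = 100 * tor_kw / (tor_kw + server_kw)
    print(f"ToR switching: {tor_kw:.1f} kW, about {share:.0f}% of a {tor_kw + server_kw:.0f}-kW rack")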

>> WAN: Given hybrid cloud application architectures and increasingly nomadic employees using mobile devices, WAN slowdowns or outages effectively mean application interruptions, no matter how reliable your private cloud server rack. Translated to infrastructure, that means big pipes to multiple carriers should be a design requirement when upgrading existing data centers or building new.

Colocation facilities serving hundreds of customers with thousands of servers have long recognized this and often have hundreds of fiber strands into their facilities from a dozen or more carriers. Many have started installing 40-Gbps WAN links and are talking to carriers about 100-Gbps circuits. Inside the data center, dedicated 10-Gbps drops to each customer are common. Some colocation sites even have direct links to cloud services like Amazon or Rackspace, meaning customer traffic to the cloud needn't traverse the Internet.

>> Storage: This may be the one area where technology is keeping up with demand while not exacerbating the space, power, and cooling problem. Solid-state systems are being deployed for high-throughput applications, while distributed, scale-out systems often displace monolithic storage arrays for high-capacity, low-performance needs. The net effect on facility power is probably a wash: solid-state devices hold far less capacity, but being electronic instead of mechanical, they also burn far less power. Still, these trends do make the environment more complex and put a load on edge networks.

chart: What are your long-term plans for hardware to run new applications?

No Clear Road Map

The unprecedented demands presented by today's hardware make it virtually impossible to get from here (older facilities) to there (a site capable of handling the requisite power, cooling, and WAN requirements) through remodeling. Yet few IT organizations have the resources to build a greenfield data center. What to do?

First, consider colocation instead of new construction. It's becoming less tenable to own and operate a data center, and those that persist in holding small facilities together with duct tape and a prayer are compromising performance and efficiency. Do a thorough cost analysis of everything required to operate what will eventually evolve into your private cloud: ventilation and cooling, power, UPS, backup generation, WAN fiber, data carriers, physical security. You may be surprised by just how expensive a do-it-yourself operation has become.
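
One way to structure that analysis is a simple line-item model you can fill in with your own quotes and utility rates. Here's a bare-bones Python sketch; every dollar figure is a placeholder, and the categories loosely follow the list above.

    # Skeleton for comparing a do-it-yourself facility with colocation over
    # a multiyear horizon. All figures are placeholders, not benchmarks.
    YEARS = 5

    diy_annual = {
        "power_and_cooling": 0.0,          # utility bills plus HVAC maintenance
        "ups_and_backup_generation": 0.0,  # batteries, fuel, service contracts
        "wan_fiber_and_carriers": 0.0,
        "physical_security": 0.0,
        "facility_staff_and_upkeep": 0.0,
    }
    diy_capital = 0.0                      # build-out or renovation cost

    colo_annual = {
        "rack_space_and_power": 0.0,       # per-kW or per-rack colocation fees
        "cross_connects_and_bandwidth": 0.0,
        "remote_hands": 0.0,
    }

    diy_total = diy_capital + YEARS * sum(diy_annual.values())
    colo_total = YEARS * sum(colo_annual.values())
    print(f"{YEARS}-year DIY: ${diy_total:,.0f} vs. colocation: ${colo_total:,.0f}")

Running your own numbers through even a skeleton like this tends to surface costs--generator maintenance, carrier diversity, after-hours staffing--that rarely show up in the initial business case.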

Second, develop a hybrid cloud strategy. Investigate the various software stacks, like OpenStack, CloudStack, vCloud, and Eucalyptus, with the goal of adopting a standard platform. For many, this will be based on VMware, but with hypervisors becoming a commodity, don't discount less-expensive and more open alternatives, particularly if you don't have a big investment in VMware's management stack. Then, let your private cloud strategy drive equipment standards. Some cloud systems, like OpenStack, are amenable to standalone servers with local storage, while others, like vCloud, may be easier and more efficient to implement on prepackaged hardware bundles like VCE Vblock, HP Matrix, or IBM SmartCloud. But beware of lock-in from single-vendor bundles, particularly those based on blade chassis, since you'll be stuck with the same vendor for at least two server generations.

The bottom line of any data center strategy is meeting new business application demands as inexpensively and quickly as possible. That means sensible standardization; efficiency through technology like virtualization, automation, and denser and faster hardware; use of public cloud and colocation services; and vendor selection and management that minimizes service and support overhead and lets vendors do more of your technology integration.

About the Author(s)

Kurt Marko

Contributing Editor

Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a varied career that has spanned virtually the entire high-tech food chain from chips to systems. Upon graduating from Stanford University with a BS and MS in Electrical Engineering, Kurt spent several years as a semiconductor device physicist, doing process design, modeling and testing. He then joined AT&T Bell Laboratories as a memory chip designer and CAD and simulation developer. Moving to Hewlett-Packard, Kurt started in the laser printer R&D lab doing electrophotography development, for which he earned a patent, but his love of computers eventually led him to join HP's nascent technical IT group. He spent 15 years as an IT engineer and was a lead architect for several enterprisewide infrastructure projects at HP, including the Windows domain infrastructure, remote access service, Exchange e-mail infrastructure and managed Web services.
