New 'MPG' Metric For Data Centers - InformationWeek


12:35 PM
Roger Smith

While data centers consume massive amounts of energy, most data centers have become massively more efficient over the past several years.

This is an important point made in a new white paper, Energy Logic: Calculating and Prioritizing Your Data Center IT Efficiency Actions, from Emerson Network Power, which introduces a new metric for data center efficiency. Emerson hopes it will offer some of the same value that miles per gallon (MPG) provides for cars: an easily understood, widely agreed-upon measure of efficiency. To put the computing industry's gains in output and efficiency in context, the white paper notes, "While auto efficiency achieved a modest 0.8 percent compound annual growth rate, data center efficiency grew by 53 percent annually. If fuel efficiency had kept pace with data center efficiency improvements, the current generation of automobiles would average 163 MPG."
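The compounding behind those figures is easy to check. A short Python sketch follows; the five-year window matches the paper's 2002-2007 comparison, but the ~19.5 MPG 2002 baseline is an assumption made here for illustration, since the white paper does not state the baseline it used:

```python
# Sanity-check the compound-growth figures quoted from the white paper.
# Window assumed: 2002-2007 (5 years), matching the paper's CUPS comparison.

def cagr_multiplier(rate: float, years: int) -> float:
    """Total growth multiplier implied by a compound annual growth rate."""
    return (1 + rate) ** years

# Data center efficiency: 53% CAGR over 5 years is roughly an 8.4x gain,
# consistent with the 738 percent figure quoted later in the article.
dc_gain = cagr_multiplier(0.53, 5)
print(f"data center efficiency multiplier: {dc_gain:.1f}x")  # ~8.4x

# Auto efficiency: 0.8% CAGR over the same window is only ~1.04x.
auto_gain = cagr_multiplier(0.008, 5)
print(f"auto efficiency multiplier: {auto_gain:.2f}x")  # ~1.04x

# Applying the data center multiplier to an assumed ~19.5 MPG 2002 baseline
# reproduces the "163 MPG" claim (baseline is a guess for illustration).
print(f"hypothetical MPG: {19.5 * dc_gain:.0f}")  # ~163
```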

Specifically, the new Energy Logic white paper introduces the concept of CUPS, or Compute Units per Second, a relative measure of server output based on average server performance in 2002. Using data from multiple industry sources, Emerson calculated the change in CUPS between 2002 and 2007, showing that while data center energy consumption has risen in recent years, the increases are overshadowed by dramatic gains in output and efficiency: "If data center output had remained flat between 2002 and 2007, the efficiency improvements achieved would have cut 2007 data center consumption to one-eighth the 2002 consumption."

CUPS serves as the numerator in the equation that determines compute efficiency for individual devices or for data centers as a whole, with power draw as the denominator: Compute Efficiency = CUPS / Watts Consumed. According to the white paper, "Server efficiency, measured in CUPS/watt, grew 658 percent (7.6x) between 2002 and 2007. Data center efficiency, aided by infrastructure improvements, achieved even more impressive gains. CUPS/data center watt grew by 738 percent (8.4x) during the same period." Data center professionals can experiment with the new CUPS metric in an online data center efficiency calculator.
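The equation above is the same at every scale, which is one of Emerson's stated goals for the metric. A minimal sketch of the ratio follows; the server figures used in the comparison are invented placeholders, not data from the white paper:

```python
def compute_efficiency(cups: float, watts: float) -> float:
    """Compute Efficiency = CUPS / Watts Consumed.

    The same ratio applies whether `cups` and `watts` describe a
    single device or totals for an entire data center.
    """
    if watts <= 0:
        raise ValueError("power draw must be positive")
    return cups / watts

# Hypothetical two-generation comparison at identical power draw
# (placeholder numbers chosen to mirror the quoted ~7.6x CUPS/watt gain):
old_server = compute_efficiency(cups=1.0, watts=250)
new_server = compute_efficiency(cups=7.6, watts=250)
print(f"efficiency gain: {new_server / old_server:.1f}x")  # 7.6x
```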

The white paper points up several shortcomings in the Power Usage Effectiveness (PUE) metric currently proposed as an interim measure of efficiency and makes a compelling case that any acceptable efficiency metric should meet at least three criteria:

1. It should drive the right behavior.
2. It should be available and published at the IT device level to help buyers make the right choice.
3. It should be fully scalable from the IT device to the data center level.

"In some cases, [if management is driving a specific metric like PUE] this could force decisions that - though on the surface may save some energy - are clearly NOT the best ROI, are not the decisions that would save the MOST energy, and may in fact jeopardized performance (computational, storage, and network) or increase the likelihood of a significant failure," said Jack Pouchet, director of energy initiatives for Emerson Network Power. "Sadly, there are well-intentioned individuals and organizations pushing efficiency improvements at any and all costs. At the end of the day, saving $50,000 or even $100,000 a month in operating expense to then lose your data center for several hours or days, when downtime and recovery costs are over $1,000,000 an hour, is not a wise decision."

Asked whether the CUPS metric is consistent at the device level since the average server was single core in 2002 and the average server these days is multi-core, Pouchet responded "The fact that a server may have 1 or a dozen processors is less of an issue at the data center level as we are looking at raw computational performance. That can be achieved through clock-rate, topology, software, silicon, RAM, drives, network/comms, etc. in any combination and has been as we have moved along the design continuum. We can expect the next five years to equal the past by using similar techniques. The net result that we are attempting to capture is the significant increase in performance from all technologies and design practices within the IT equipment space."

Reiterating a point made in the white paper that the greatest efficiency gains can be achieved through faster replacement of high-density computing equipment (including power and cooling products), Pouchet said "The CUPS proxy curve can be developed similarly for networking and storage devices. We encourage the industry to bring forward their representative performance curves to enable data center operators to then make more intelligent hardware configuration, upgrade, and refresh decisions."
