Google's Urs Holzle: Moore's Law Is Ending

Google's Urs Holzle says cloud suppliers won't be able to bank the gains of Moore's Law much longer and will have to eke out advances elsewhere.

Charles Babcock, Editor at Large, Cloud

November 10, 2016

4 Min Read
(Image: serg3d/iStockphoto)


The cloud is currently an assembly of commodity technologies organized on a massive scale. However, it is on its way to becoming a mix of commodity and advanced, specialized technologies in order to sustain its ability to offer leading-edge performance.

The cloud needs more advanced, specialized technologies because Moore's Law is running out of steam, according to Urs Holzle, Google's senior vice president for technical infrastructure and a Google Fellow. The demise of Moore's Law, which once decreed that the number of transistors on a chip would double every 18 months, will occur sometime in 2021, according to the IEEE's Spectrum publication.

The growth in CPU power since Gordon Moore first announced his law in 1965 is what has allowed Google and other handlers of big data to continually improve performance.

However, that free ride cannot be relied upon forever.

If the growth curve of computing levels off, neither Google nor enterprise IT can let data keep increasing two to five times a year without costs increasing along with it. If anything, machine learning, artificial intelligence, and business analytics will require a constantly expanding supply of compute without a matching increase in cost.
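As a rough, hypothetical illustration of that squeeze (the 3x annual growth rate below is an assumed midpoint of the two-to-five-times range, not a figure Holzle gave):

```python
# Hypothetical illustration: if workloads triple each year -- an assumed
# midpoint of the "two to five times" range -- and cost per unit of
# compute stays flat because Moore's Law has stalled, the processing
# bill tracks the workload.
growth_per_year = 3.0
bill = 1.0  # normalized year-0 cost
for year in range(1, 6):
    bill *= growth_per_year
    print(f"Year {year}: workload and bill at {bill:.0f}x year-0 levels")
# By year 5 the bill is 243x -- the cost squeeze Holzle describes.
```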

That was the vision Holzle sought to convey to the audience at the Structure 2016 event, held Nov. 8 and 9 in San Francisco. Holzle, the first guest speaker at the conference, discussed the issue with Nicole Hemsoth, co-editor of The Next Platform, an online news source about high-performance computing.

"A surprising number of customers have a need for large-scale computation," he said. Suppliers, such as Google's Compute Platform, need to evolve to ensure that that capability is available, without the changes proving disruptive to end-users. The cloud supplier is in a better position to weave in new technology than every enterprise that's trying to solve an issue by itself.

"In the cloud, it's easier to insert new technology," he said. Some of the gains Google is considering will increase performance by 30%, instead of the 100% achieved by Moore's Law. But Holzle said Google must take the gains where it can find them.

"Infrastructure is one of those things, if you do it right, nobody cares about it," said Holzle. But getting it right, as the significance of Moore's Law fades, will be increasingly a challenge.

"Moore's Law is a problem for IT as well," he noted. IT inevitably has a growing amount of data and workloads. If the bill to process them grows at the same rate as the workloads, it will risk putting many enterprises out of business. "That gets people in trouble."

Greater use of flash memory and faster data movement within the server, as enabled by the OpenCAPI standard, are two ways cloud suppliers will keep expanding the ability to run compute-intensive workloads.

OpenCAPI, the Open Coherent Accelerator Processor Interface, moves data within the server over 25-Gbps links. Unlike the original CAPI, which rode atop the PCI Express bus, its direct connection to the processor lets attached devices operate at close to the speed of random access memory, expanding the capacity and speed of operation of the server.
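For a sense of scale, a quick unit conversion (assuming, as in published OpenCAPI material, that the 25 Gbps figure is per lane, and taking roughly 20 GB/s as a ballpark DDR4 memory-channel rate; both are assumptions, not figures from the article):

```python
# Convert the quoted link speed to bytes per second and compare it to a
# rough DDR4 memory-channel figure (~20 GB/s, an assumed ballpark).
link_gbps = 25                # quoted OpenCAPI link speed, gigabits/s
lane_gbytes = link_gbps / 8   # 3.125 GB/s per lane
lanes = 8                     # hypothetical multi-lane link width
total = lanes * lane_gbytes
print(f"{link_gbps} Gbps per lane = {lane_gbytes:.3f} GB/s")
print(f"{lanes} lanes ~= {total:.0f} GB/s, in the neighborhood of a "
      f"DDR4 memory channel (~20 GB/s)")
```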

Dell, Hewlett Packard Enterprise, IBM, and Google are all backers of the new standard. Google and Rackspace have expressed interest in buying next-generation Power9 chips that support OpenCAPI for their cloud operations. Within such servers, graphics processing units and CPUs could exchange data at high rates of speed.

(Image: Urs Holzle)

Whether Google and Rackspace would use them for general purpose infrastructure or for data-intensive workloads isn't known at this time.

IBM is designing servers around Power9 that are due out sometime in 2017. It sold its chip-manufacturing operations to GlobalFoundries in 2014.

[Want to see how Google relies on software-defined networking? Read Google's Infrastructure Chief Talks SDN.]

Also at Structure 2016, Facebook's Jay Parikh said the social media company has contributed the plans for its Backpack switch to the Open Compute Project. Backpack is a modular design that combines switch "elements" to scale from 40 Gbps up to 100 Gbps. It is considered a second generation of Open Compute's switch-creation effort.

"The Open Compute Foundation has received the Backpack specification," said Jay Parikh, Facebook's vice president of engineering, at the event Nov. 9. Facebook has used its Wedge and other switch designs in the construction of its own data centers. Equinix has also started adopting OpenCompute switches in its carrier-neutral cloud data centers.

The Open Compute Project makes hardware specifications available for anyone to use. Several large financial services firms, such as Goldman Sachs and Capital One, have been equipping their data centers with servers, racks, and switches that follow its specifications.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined InformationWeek in 2003.
