

Software // Information Management
Commentary
11/26/2008 02:49 PM
Roger Smith

Google Sorts One Petabyte Of Data In 6 Hours

According to last Friday's Official Google Blog, the Google Systems Infrastructure Team has sorted 1 terabyte of data on 1,000 computers in a record 68 seconds, breaking the previous mark of 209 seconds set in July by Yahoo.

Team leader Grzegorz Czajkowski wrote that the team followed the rules of the standard terabyte sort benchmark and used Google's MapReduce software framework, which supports parallel computation over large (multi-petabyte) data sets on clusters of computers. Yahoo's effort featured a 910-node cluster and used Hadoop, an open-source MapReduce implementation.
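
To make the model concrete, here is a minimal single-process Python sketch of how a MapReduce-style sort proceeds: map tasks tag each record with its sort key, keys are range-partitioned so each reducer receives a contiguous key range, and each reducer sorts its slice independently. The function names and the four-"reducer" setup are illustrative only, not Google's actual implementation, which distributes these steps across thousands of machines.

    # Minimal sketch of a MapReduce-style distributed sort, simulated in one
    # process. Real systems run the map and reduce tasks on thousands of machines.
    import os

    def map_phase(records):
        # Emit (key, record) pairs; per the benchmark, the first 10 bytes
        # of each 100-byte record serve as the sort key.
        for record in records:
            yield record[:10], record

    def partition(key, num_reducers):
        # Range-partition on the key's first byte so reducer i holds only
        # keys smaller than reducer i+1's; concatenating reducer outputs
        # in order then yields one globally sorted result.
        return key[0] * num_reducers // 256

    def mapreduce_sort(records, num_reducers=4):
        buckets = [[] for _ in range(num_reducers)]
        for key, record in map_phase(records):        # "map" plus "shuffle"
            buckets[partition(key, num_reducers)].append((key, record))
        output = []
        for bucket in buckets:                        # each "reduce" task;
            bucket.sort(key=lambda kv: kv[0])         # run in parallel for real
            output.extend(record for _, record in bucket)
        return output

    data = [os.urandom(100) for _ in range(1000)]     # toy stand-in for 10 billion
    assert mapreduce_sort(data) == sorted(data, key=lambda r: r[:10])

The range partitioning is the crucial trick: it turns one global sort into many independent local sorts whose outputs can simply be concatenated.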

The sort benchmark, created in 1998 by computer scientist Jim Gray, specifies the input data (10 billion 100-byte records in uncompressed text files), which must be completely sorted and written to disk. Not content with merely rewriting the record book, the Google team then decided to up the ante in sorting massive volumes of data. "Sometimes you need to sort more than a terabyte, so we were curious to find out what happens when you sort more and gave one petabyte (PB) a try," said Czajkowski. "It took six hours and two minutes to sort 1 PB (10 trillion 100-byte records) on 4,000 computers. We're not aware of any other sorting experiment at this scale and are obviously very excited to be able to process so much data so quickly."
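
For a sense of what that input looks like, the sketch below writes a few records in the commonly cited layout of a 10-byte random key followed by a 90-byte payload. This is an illustration only: the official benchmark supplies its own input generator (gensort), and the payload format here is invented.

    # Illustrative generator for terabyte-sort-style input. Assumed layout:
    # 10-byte random key + 90-byte payload = one 100-byte record, so
    # 10**10 records x 100 bytes = 10**12 bytes (1 TB); the petabyte run
    # sorted 10**13 such records. The real benchmark uses gensort.
    import os

    RECORD_LEN, KEY_LEN = 100, 10

    def make_records(n):
        for i in range(n):
            key = os.urandom(KEY_LEN)                          # random sort key
            payload = str(i).encode().ljust(RECORD_LEN - KEY_LEN, b".")
            yield key + payload

    with open("sort_input.dat", "wb") as out:
        for record in make_records(1000):                      # tiny sample only
            out.write(record)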

One petabyte is a thousand terabytes, or roughly 12 times the amount of archived Web data in the U.S. Library of Congress as of May 2008. One way to put that amount in perspective, according to Czajkowski, is to consider that the aggregate size of data processed by all instances of MapReduce at Google averaged 20 PB per day in January 2008. A paper explaining MapReduce on the Google Labs site says that upwards of one thousand MapReduce jobs are executed on Google's clusters every day. By that math, the infrastructure team's benchmark-stretching run equaled about 50 typical MapReduce jobs, or one-twentieth of all the MapReduce work Google's clusters handle in a day.
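
A quick back-of-the-envelope check of that comparison, using the figures cited above:

    # Scale comparison using the January 2008 numbers quoted in the article.
    daily_total_tb = 20 * 1000          # 20 PB processed per day, in TB
    daily_jobs = 1000                   # "upwards of one thousand" jobs per day
    avg_job_tb = daily_total_tb / daily_jobs       # 20 TB per typical job
    petabyte_sort_tb = 1000
    print(petabyte_sort_tb / avg_job_tb)           # 50.0 typical jobs
    print(petabyte_sort_tb / daily_total_tb)       # 0.05, i.e. one-twentieth of a day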

As I reported a couple of months ago, Microsoft has its own strategy for sorting massive data sets, described in a white paper presented at a database conference. All companies that operate Internet-scale cloud services need to store and process massive data sets, such as search logs, Web content collected by crawlers, and click-streams gathered from a variety of Web services. Google, Yahoo, and Microsoft have each developed systems that support parallel computation over multi-petabyte data sets on clusters of computers. While Google and Yahoo rely on the MapReduce programming model, Microsoft's SCOPE programming model intentionally builds on end-user familiarity with relational data and SQL. Microsoft's sorting strategy remains primarily conceptual at this point since, unlike Google and Yahoo, the company hasn't competed in any recent benchmark tests.
