Supercomputers: New Software Needed

Next hurdle for high-performance computing is figuring out how to handle unstructured data.

Patience Wait, Contributor

December 31, 2013

4 Min Read

Supercomputing, in the broadest sense, is about finding the perfect combination of speed and power, even as advancing technology keeps redefining what that perfection means. But the single biggest challenge in high-performance computing (HPC) now lies on the software side: creating code that can keep up with the processors.

"As you go back and try to adapt legacy codes to modern architecture, there's a lot of baggage that comes along," said Mike Papka, director of the Argonne Leadership Computing Facility and deputy associate laboratory director for computing, environment and life sciences at Argonne National Laboratory. "It's not clear to me what the path forward is … [the Department of Energy] is very interested in a modern approach to programming, what applications look like."

Much attention has been given to rating the speed of supercomputers. Twice a year, the world's top 500 supercomputers are ranked by their performance on the LINPACK benchmark, most recently in November, when China's National University of Defense Technology's Tianhe-2 (Milky Way-2) supercomputer achieved a benchmark speed of 33.86 petaflops (Pflop/s). Titan, a Cray supercomputer operated by Oak Ridge National Laboratory that topped the list in November 2012, came in second at 17.59 Pflop/s.
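For a sense of how such a benchmark figure comes about: LINPACK times the solution of a very large dense system of linear equations and divides the operation count by the elapsed time. The sketch below walks through that arithmetic; the matrix size and runtime are hypothetical illustrations, not the actual parameters of the Tianhe-2 run.

```python
# Illustrative sketch of how a LINPACK-style benchmark speed is derived.
# The problem size and runtime below are hypothetical examples, not the
# parameters of any actual Top500 run.

def hpl_flop_count(n: int) -> float:
    """Approximate floating-point operations to solve an n x n dense
    linear system, the workload the LINPACK (HPL) benchmark times."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

n = 10_000_000              # hypothetical matrix dimension
runtime_seconds = 5 * 3600  # hypothetical wall-clock time: five hours

flops_per_second = hpl_flop_count(n) / runtime_seconds
print(f"Benchmark speed: {flops_per_second / 1e15:.2f} Pflop/s")
# Roughly 37 Pflop/s for these example inputs
```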

The next level beyond today's petascale machines is exascale computing: systems capable of a million trillion calculations per second (an exaflop). HPC may reach that level by 2020, Papka said, but before then -- perhaps in the 2017-2018 timeframe -- the next generation of supercomputers may get to 400 Pflop/s.
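A quick back-of-the-envelope check puts those targets on one scale (illustrative arithmetic, not figures cited by the article's sources):

```python
# Back-of-the-envelope scale check on the figures in this story.
PFLOPS = 1e15   # one petaflop/s: a thousand trillion calculations per second
EFLOPS = 1e18   # one exaflop/s:  a million trillion calculations per second

tianhe_2 = 33.86 * PFLOPS   # No. 1 on the November 2013 Top500 list
next_gen = 400 * PFLOPS     # the roughly 2017-2018 machine Papka describes

print(f"Exascale is about {EFLOPS / tianhe_2:.0f}x Tianhe-2")           # ~30x
print(f"A 400 Pflop/s machine is {next_gen / EFLOPS:.0%} of exascale")  # 40%
```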

"If all the stars aligned, the money's there, and developers had the resources [by] combining Oak Ridge and Argonne, we have made the case that the scientific community needs a 400-petaflop machine," Papka said. "Vendors have work to do, labs have infrastructure to put in place -- heating, cooling, floor space. It's not just buying machines any more, you've got to have the software [and] applications in place."

One of the challenges in building faster supercomputers is designing an operating system capable of managing a machine running at that scale. Argonne, in collaboration with two other national laboratories, is tackling that problem with a project called Argo.

Tony Celeste, director of federal sales at Brocade, said another emerging trend in HPC is a growing awareness of its applicability to other IT developments, such as big data and analytics. "There are a number of emerging applications in those areas," he said. "Software now, networks in particular, have to move vast amounts of data around. The traffic pattern has changed; there's a lot of communication going on between servers, and between servers and supercomputers ... It's changing what supercomputing was 10, 15 years ago."

Other important trends Celeste identified include an emphasis on open rather than proprietary systems and a growing recognition of energy efficiency as a requirement.

Patrick Dreher, chief scientist in the HPC technologies group at DRC, said the growing interest in HPC outside the circles of fundamental scientific research is driven by "demand for better, more accurate, more detailed computational simulations across the spectrum of science and engineering. It's a very cost-effective way to design products, research things, and much cheaper and faster than building prototypes."

Dreher's colleague, Rajiv Bendale, director of DRC's science and technology division, said the HPC community's emphasis is shifting a little away from the speed/power paradigm and toward addressing software challenges. "What matters is not acquiring the iron, but being able to run code that matters," Bendale said. "Rather than increasing the push to parallelize codes, the effort is on efficient use of codes."

About the Author

Patience Wait

Contributor

Washington-based Patience Wait contributes articles about government IT to InformationWeek.
