U.S. Regains Top Supercomputer Spots
But some scientists warn the government needs to do even more
Two years after losing their technical lead in the supercomputing race, U.S. manufacturers reclaimed preeminence in the field last week, as systems that IBM and Silicon Graphics Inc. designed for government contracts were named the world's fastest.
IBM's Blue Gene/L, being installed at Lawrence Livermore National Laboratory in California, is now the world's fastest computer, capable of a staggering 70.72 trillion computations per second. That's double the capacity of the previous fastest system, the Japanese government's Earth Simulator, installed by NEC in Yokohama, Japan, in 2002 and capable of sustaining 35.86 trillion floating-point operations per second. The Earth Simulator, which stunned U.S. technologists when it claimed the mantle of world's fastest, slipped to No. 3 on a closely watched list of the world's 500 fastest supercomputers assembled by a group of computer scientists.
No. 2 on the new Top 500 list (www.top500.org) released last week is SGI's Columbia system at NASA's Ames Research Center in Silicon Valley. That system can sustain 51.87 teraflops. The results were achieved on the Linpack benchmark, which times the solution of a large, dense system of linear equations, and were unveiled at a supercomputing conference in Pittsburgh.
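For a sense of what that benchmark actually measures, the sketch below (an illustration only, not the real distributed benchmark code) solves a dense random system Ax = b on a single machine and converts the elapsed time into a flop rate using the benchmark's conventional operation count of (2/3)n^3 + 2n^2. The production benchmark performs the same kind of solve, but highly tuned and spread across thousands of processors.

```python
# Minimal single-machine sketch of a Linpack-style measurement
# (not the actual HPL benchmark): time a dense solve of Ax = b and
# report a flop rate using the conventional (2/3)n^3 + 2n^2 count.
import time
import numpy as np

n = 2000                                   # modest size for a desktop demo
rng = np.random.default_rng(0)
A = rng.random((n, n))
b = rng.random(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # standard Linpack operation count
print(f"n = {n}: {elapsed:.3f} s, {flops / elapsed / 1e9:.2f} gigaflops")

# Sanity check: the residual should be tiny if the solve succeeded.
print("residual norm:", np.linalg.norm(A @ x - b))
```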
"This is really certifying the vitality of the American computer manufacturers," says Thom Dunning, the incoming director of the National Center for Supercomputing Applications in Illinois. "It's important to note that it was done by two vendors using two very different paths." IBM's system uses more than 32,000 embedded processors designed for low power and fast, on-chip data movement, whereas SGI has built fast interconnections between more than 10,000 Intel Itanium processors. Both approaches could yield more affordable computing power.
U.S. technology is progressing rapidly. IBM says it's on track to quadruple the size of the Blue Gene/L system it's assembling for Livermore to achieve a benchmark result of 360 teraflops by next year. And U.S. manufacturers could reach a petaflop of performance--one quadrillion operations per second--by 2008 or sooner.
But some computer scientists are concerned that U.S. supercomputing is in danger of slipping behind again. A panel of experts convened by the National Research Council last week released a report, prepared for the Energy Department, that called for the government to increase its funding for high-performance computing to $140 million per year, nearly $100 million more than the government is estimated to spend on the field today. The panel warned in the report that the government needs to make a long-term investment in high-performance computing to give U.S. scientists the best tools for fields such as nuclear weapons stockpile stewardship, intelligence, and climate research.
The United States isn't investing enough in software, the programming tools and algorithms needed to keep pace with the hardware systems being assembled, says Jack Dongarra, a computer-science professor at the University of Tennessee who served on the committee and who helps assemble the twice-yearly Top 500 supercomputer list. Says Dongarra, "We're using a programming paradigm--Fortran and C--that's been around since the '70s. You end up investing more in programming."
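To make that programming burden concrete, here is a hypothetical sketch (not from the article) of the explicit message-passing style that Fortran and C codes on these machines typically rely on, written in Python with the mpi4py wrapper for brevity. Even a one-line mathematical operation, a dot product, requires the programmer to partition the data and orchestrate the communication by hand.

```python
# Illustration of the explicit message-passing style used on large
# parallel machines: each process owns a slice of the vectors, computes
# a partial dot product, and an explicit collective combines the parts.
# Requires the mpi4py package and an MPI runtime (e.g. run with mpiexec).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000                      # global vector length (example value)
local_n = n // size                # each rank owns one slice

# Each rank fills only its own portion of the two vectors.
rng = np.random.default_rng(seed=rank)
x_local = rng.random(local_n)
y_local = rng.random(local_n)

# Local partial result, then an explicit collective sum across ranks.
partial = float(np.dot(x_local, y_local))
total = comm.allreduce(partial, op=MPI.SUM)

if rank == 0:
    print(f"global dot product across {size} ranks: {total:.4f}")
```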