Wall Street Tech Elite Are Gonna Take Supercomputing To IT Main Street
What's in a <a href="http://www.informationweek.com/news/showArticle.jhtml?articleID=164902108">supercomputer</a>? Twenty years ago, the fastest machines in the world were specialty architectures designed by quirky geniuses like Seymour Cray. Today, the field's name has changed -- it's called HPC, for high-performance computing -- but it's still where the action is. That was definitely the deal Monday at the <a href="http://www.wallstreetandtech.com/blog/archives/2007/09/news_from_the_h.html">High Performance on Wall Street</a> conference in New York City.
Since supercomputers are to desktops, or even enterprise blades, as Corvettes are to Ford Tauruses, you might wonder why you should care about them. (Sorry, I can't resist using the older term, though technically speaking there's a difference, according to Wikipedia.)
Attention must be paid because they're a leading indicator of the kind of computing technology you're going to see in the mainstream in two to three years' time. I've been covering supercomputers on and off for 20 years, and what I saw at the conference convinced me that the pace of technology transfer is going to accelerate.
Indeed, given all the hubbub about multicore processors at the conference, I predict that the high-performance crowd -- that is, the IT managers, developers, and programmers who've cut their teeth in that sector -- is going to be a key source of technical assistance to mainstream data centers from here on out.
The reason: These people are the only ones who are up to speed on, and have the technical smarts to take real advantage of, the new multicore processing architectures coming out of Intel and AMD.
Think about it. In the presentations I saw, panelists were talking about how to manage clusters of anywhere from 100 to 1,000 processors!
The HPC crowd is messing around with this stuff because they're charged with managing the most mission-critical applications around. Those are the real-time, advanced trading programs used by Wall Street.
However, unlike in the old days, when ultra-high-end processors were mostly unaffordable and locked away in an NSA back room, today everybody can access HPC. If you've got a bunch of blades, you can create your own cluster. And if you don't have the hardware on hand, you can rent compute cycles from IBM or Sun Microsystems.
Interestingly, and perhaps paradoxically, the core challenge today remains the same as it ever was: it's very hard to write software that can take proper advantage of all that hardware.
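To make that concrete, here's a minimal sketch (mine, not anything shown at the conference) of the classic multicore pitfall: several cores updating one shared variable. It uses OpenMP, one of the standard ways to spread a loop across cores. The commented-out version races on the shared total and silently produces wrong answers; the reduction clause keeps per-thread partial sums and combines them safely.

```c
/* Minimal multicore sketch: summing an array across cores with OpenMP.
 * Compile with: gcc -fopenmp sum.c -o sum */
#include <stdio.h>
#include <omp.h>

#define N 10000000

int main(void)
{
    static double data[N];
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    double total = 0.0;

    /* BROKEN (data race): every thread updates "total" at the same time.
     * #pragma omp parallel for
     * for (long i = 0; i < N; i++) total += data[i];
     */

    /* Correct: reduction(+:total) gives each thread a private copy of
     * "total" and adds the copies together at the end of the loop. */
    #pragma omp parallel for reduction(+:total)
    for (long i = 0; i < N; i++)
        total += data[i];

    printf("sum = %.0f (up to %d threads)\n", total, omp_get_max_threads());
    return 0;
}
```

Getting even this toy loop right takes some care; now scale that up to a trading engine spread over hundreds of CPUs and you see why the HPC folks earn their keep.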
OK, I've done a lot of telegraphing in the above post without diving into the nitty-gritty details, which in any case you shouldn't try at home. Let me instead leave you with something useful: a pointer to the three main tech areas you should familiarize yourself with. These were on the tips of everyone's tongues at the conference, and they will be very important over the next 6 to 36 months.
Grid Computing: Originally a technology, then a cottage industry of startups (see my old but not too moldy grid article). Now we're beyond the hype stage and grid is a commercial reality. Look for major activity in this arena in the coming year, with some of the smaller vendors becoming attractive acquisition targets.
Infiniband: This multi-gigabit-per-second communications link is the main way processors in large clusters pass messages back and forth. A really fast link matters because of the hundreds (often thousands) of CPUs in such clusters. Even with Infiniband, latency remains a big problem. (There's a minimal message-passing sketch after this list.)
Multicore: The multicore processor architectures from Intel and AMD, which I mentioned earlier, are by now so well known that you probably think of them as commonplace. That would be wrong, very wrong. You should instead think of them as the enablers of computing's future, which is here today. (Yes, Virginia, we live in very exciting times.) One thing panelists harped on at the conference, which I wasn't as aware of as I should have been, is that both AMD and Intel have posted lots of heavy-duty technical training material on their Web sites covering their respective multicore processors and architectures. I'll add in links as I get them…
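As promised, here's a minimal sketch of what actually travels over those Infiniband links, written against MPI, the standard message-passing API on these clusters. It's an illustrative "ping-pong" of my own (not code from any conference presentation): rank 0 sends a small buffer to rank 1, gets a reply, and times the round trip -- which is exactly the latency number the panelists were fretting about.

```c
/* Minimal MPI ping-pong between two processes.
 * Build and run (assuming an MPI install such as Open MPI):
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char msg[32] = "ping";
    if (rank == 0) {
        double t0 = MPI_Wtime();
        /* Send to rank 1, then wait for its reply. */
        MPI_Send(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("round trip: %.1f microseconds\n",
               (MPI_Wtime() - t0) * 1e6);
    } else if (rank == 1) {
        /* Receive the ping and send back a pong. */
        MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send("pong", 5, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Multiply that one round trip by the millions of messages a big cluster exchanges every second, and you see why the interconnect -- not just the CPUs -- decides how fast the whole system really is.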