
Serdar Yegulalp, Contributor

June 18, 2008


My first actual panel for the opening day of the Red Hat Summit (http://www.redhat.com/promo/summit/2008/) sported the eye-grabbing moniker "Why computers are getting slower (and what we can do about it)." With a title like that, I was worried I'd be in for a fluff panel about spyware 'n viruses on Windows being performance killers, with Linux as the panacea for that. I couldn't have been more wrong, thank goodness.

Rik van Riel, senior software engineer for Red Hat, braved multiple interruptions by a hair-trigger fire-alarm system to tell us how, as counterintuitive as it might seem, faster components may lead to slower systems. The size and speed of your hard drive haven't kept pace with the amount of data being retrieved from it and, with the easy gains of Moore's Law drying up, the age of what van Riel called "hardware miracles" is pretty much over. From here on out it's an arms race, with hardware advances unable to keep pace with the increasing demands placed on them.

So what's the solution to this mess? Parallelism alone won't fix everything -- you can throw more cores at a problem, sure, but at the risk of introducing context-switching overhead and other delays into the mix. Van Riel highlighted several key areas where users, application authors, and Linux kernel/OS devs can all make improvements. One is the scheduler -- for instance, consolidating threads onto a single core during idle periods so that the unused cores can be powered down. This goes hand in hand with other recent kernel developments, like the "tickless" kernel, which reduces the number of times the kernel has to wake up and poke the system to see if anything's up.
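Van Riel was describing changes inside the kernel's scheduler, but the consolidation idea is easy to sketch from userspace. Here's a minimal, hypothetical illustration assuming Linux and glibc's pthread affinity API (compile with -pthread); the worker function and the choice of CPU 0 are stand-ins of mine, not anything shown in the talk:

```c
/* Sketch: consolidating threads onto one core so idle cores can sleep.
 * This is a hand-rolled illustration of the idea, not Red Hat's patch --
 * the real scheduler migrates threads automatically and dynamically. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical worker; stands in for whatever the threads actually do. */
static void *worker(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t threads[4];
    cpu_set_t one_core;

    /* Restrict every thread to CPU 0, mirroring the "consolidate during
     * idle periods" idea so the remaining cores can be powered down. */
    CPU_ZERO(&one_core);
    CPU_SET(0, &one_core);

    for (int i = 0; i < 4; i++) {
        pthread_create(&threads[i], NULL, worker, NULL);
        pthread_setaffinity_np(threads[i], sizeof(one_core), &one_core);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);

    printf("all threads ran pinned to CPU 0\n");
    return 0;
}
```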

The best thing about the panel was a lot of agreement from the audience -- many of whom were doing kernelspace work of their own -- that what seems like a performance improvement on the surface may turn out to just make things worse. If you throw more memory at a system via NUMA, for instance, you may simply be taking work that is handled best in a single block, scattering it, and creating whole new kinds of latencies. (Plus, on the open source side, if you deal with these problems out in the open instead of behind closed doors, the tide you raise will lift all boats -- even the ones you're not riding in.)
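To make the NUMA point concrete, here's a hedged sketch using libnuma, the Linux NUMA policy library (link with -lnuma). Keeping an allocation and the thread that uses it on the same node avoids exactly the scattering the audience described; the node number and buffer size here are arbitrary choices of mine, not figures from the session:

```c
/* Sketch: keep memory and computation on the same NUMA node so the
 * working set stays local instead of being scattered across nodes. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    /* Allocate 64 MB on node 0 and run this thread there too. */
    size_t len = 64UL << 20;
    char *buf = numa_alloc_onnode(len, 0);
    if (!buf)
        return 1;
    numa_run_on_node(0);

    memset(buf, 0, len);   /* first touch: page faults land on node 0 */

    numa_free(buf, len);
    return 0;
}
```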
