Fred Langa

Langa Letter: Your Next PC: Legacy Free?

IBM's PC AT has cast a shadow over PC system architecture for more than two decades. Thanks to advances such as Intel's EFI, PC vendors are on the verge of breaking the legacy bottlenecks. Kiss your BIOS goodbye.

Many people don't know it, but today's PCs--including the system you're using right now--contain elements that have hardly changed at all in the last 20 years. Yes, CPUs are faster, hard drives are bigger, and RAM banks are larger. But in many fundamental ways, your PC isn't very different from the PCs of two decades ago.

20-year-old PC

Think I'm exaggerating? Take a look at this almost-20-year-old image (left), scanned from the October 1984 issue of Byte magazine, which covered the rollout of the original IBM PC AT. If you've ever opened up your PC, the overall layout will instantly seem familiar, and you'll recognize many of the components. Note the power supply in the rear right corner, the floppies in the open bays on the right, the hard drive in the closed bay near the center, the system switches and speaker, and the card slots to the left. Experienced eyes will even pick out the BIOS chip, the battery backup for the BIOS, the RAM banks, the familiar-looking cables and electrical connectors, and more.

Although some of the system elements have been modified over time, almost everything in your PC is a direct lineal descendant of the IBM PC AT--a seminal design that still shapes PC architecture two decades later.

Stability--But Also Stagnation
In many ways, the PC's hardware consistency over time has been a good thing, a stabilizing force in the otherwise rapidly changing world of computing. It's been a huge positive for businesses and users because this consistency has made many peripherals completely interchangeable. For decades, we've been able to mix and match printers, keyboards, mice, monitors, scanners, modems, and more, largely without regard to the brand of PC.

Hardware standardization also has helped the bottom line by driving down prices: System and peripheral vendors have had a vast and uniform market from which to draw supplies, and to which to sell products, resulting in the commodity-level pricing that's behind today's amazingly low hardware costs. Overall, the PC AT's legacy has been an enormously positive one.

But it also has had a downside, principally in retarding innovation and slowing hardware advancements. The installed base--that is, the mass of existing, older, in-use hardware--acts like a giant speed brake on the computer industry because businesses and users are loath to give up older equipment that's still functional, even if newer designs would perform better or faster. As a result, new technologies tend to emerge piecemeal and more slowly than they would if hardware vendors could make a clean break with the past.

There's even a joke that made the rounds of the computing industry awhile ago: "Why was God able to create the universe in only seven days? Because he didn't have an installed base to deal with."

Despite this backward drag from the installed base, the Grail of many hardware engineers has long been a totally "legacy-free" PC that employs only fully modern, state-of-the-art, high-speed components and architectures. Such a PC would be faster, more compact, more reliable, and less expensive, as well as easier to manufacture and maintain.

Although partially legacy-free designs have emerged at various times, no one has yet put all the pieces together and produced a PC that traces none of its hardware architecture to the original IBM designs.

But that's about to change. As the PC AT's 20th anniversary approaches, some vendors are already working on totally legacy-free designs that will finally do away with even such fundamentals as the BIOS--the basic input/output system that has booted every PC ever made since the original IBM PC design in 1981. (Yes, some legacy components go back even further than the AT. We'll come back to this point in a moment.)

Let's examine the major legacy components: we'll see where they came from, what they're evolving toward, and what it means for you. Strap yourself in--it's going to be quite a ride!

Bye-Bye, BIOS
The original IBM PC, introduced in 1981, contained the seeds of the design that would later (in the IBM AT of 1984) come to dominate PC architecture. This image (below), also scanned from an old Byte magazine, shows a photo of the actual hand-wired prototype motherboard for the original IBM PC--one of only two such prototypes built. That rough-looking circuit board is actually the forebear of all PCs ever made, an artifact as important as, say, Bell's first telephone or Edison's light bulb or the Wrights' Flyer.

Original IBM PC design

The original PC's resemblance to today's PCs is less obvious than the AT's because some of the 1981 IBM PC's features--such as the port for connecting the PC to an audiocassette recorder for data storage--have fallen into obsolescence. But several other important components and features carried over to the 1984 AT design and have lingered ever since.

For example, even that first prototype PC had a BIOS that contained simple setup and diagnostic routines and controlled how the system booted and ran, exactly as it does in today's PCs.

To be sure, the BIOS has evolved over time. For example, unlike today's BIOSes, the original PC BIOS also contained a complete (albeit modest) software language, so users could do something with their PCs without having to load additional software from a slow cassette drive or from an expensive, optional floppy drive. This language was a version of Basic supplied to IBM by a then-little-known company called Microsoft.

Although the BIOS has evolved, it's still there at the heart of literally every PC ever made, an architectural component so deeply entrenched it may be the very last piece of the original IBM PC legacy to fade away.

But doing away with the BIOS is exactly what Intel's Extensible Firmware Interface (EFI) project aims to accomplish.

The Extensible Firmware Interface
The EFI is a tiny, secure operating system that sits between the hardware of a PC--or any computing device--and the high-level operating system (like Windows or Linux) that humans normally interact with. Although the EFI can emulate a traditional BIOS, it also can do much more. For example, it can provide a full mouse-driven graphical interface for controlling the low-level hardware functions that today can only be controlled by hitting a special key at startup and entering a limited, arcane, and text-only "BIOS Setup" routine.

But that's only the beginning, because EFI is really a kind of blank slate that will allow a total rethinking of how computers start up. For example, a traditional BIOS is space-limited, so most are programmed in compact, low-level assembly language, which is notoriously difficult to do well--in fact, relatively few engineers are proficient at it. In contrast, EFI is written in C, one of the world's most popular high-level programming languages, and EFI isn't space-constrained because its data resides in a special reserved area of the hard drive. This means that far more engineers will be able to do more creative things with PC hardware than is now possible. There's no telling where EFI will lead, but it almost surely will initially result in new forms of system maintenance, repair, and recovery tools; new ways to install operating systems and peripherals; and more.

For example, at last year's Intel Developer Forum, Intel's Yosi Govezensky reeled off several likely EFI near-term spin-offs:

  • Portable, operating-system-neutral disk-management and boot-management tools
  • Remote configuration and installation options
  • Platform management utilities outside the operating system, such as a bootable flash update CD without DOS, customer-support utilities, and country (regionalization) kits.

Intel's EFI project began in 2000, and many companies, including Adaptec, AMI, ATI, Hewlett-Packard, LSI, Microsoft, and PowerQuest, are working with Intel to make EFI a production reality. EFI is steadily gathering momentum and is definitely a technology to watch. More info is available at technology/efi/efi.htm.

Legacy Ports
The original base model IBM PC had only a keyboard port, a cassette port, a TV connection (yes--a TV, not a monitor), and a printer port as standard equipment, although you could add monitor, game, and serial communication ports as options. Over time, these became standard gear and part of the load of "legacy" hardware.

Mice were rare on early PCs, because those PCs mostly ran in DOS text mode. When mice appeared, they usually connected to an existing serial port or to a special proprietary mouse port built into a graphics adapter card.

Eventually, with the introduction of the PS/2 line of PCs in 1987, IBM changed the AT's large keyboard plug to the miniplug that's in common use today. (See keyboard.html for examples.) Mice followed thereafter, although it would be almost 10 years before most systems shipped with only the PS/2-style mouse and keyboard sockets.

But even that wasn't a true move to legacy-free design, because all it really did was change the size and shape of the connectors. And meanwhile, serial/communications, printer, and game ports hadn't changed much at all.

Then USB entered the scene--and the universal serial bus was a true break from classic PC AT design. Fast, flexible, and extensible, USB could--in theory--handle almost any kind of serial input, including mice, keyboards, printers, modems or other communications, and game device input.

But the move to USB has been hampered by several factors. USB devices may work poorly or not at all on older PCs, and, more importantly, the huge installed base of non-USB peripherals has made the change slow going.

For example, Phil Osako, Gateway's director of product management, says one reason for the slow change is "the large number of customers with legacy ergonomic keyboards, optical mice, and trackballs" who won't want to switch to USB input devices until their older devices are no longer serviceable. There are similar issues with printers, scanners, and other peripherals.

Hewlett-Packard's Brian Schmitz agrees: "Customers understand the benefits of legacy-free, but many are anxious about needing backward compatibility. HP addresses this by providing optional serial/parallel/PS/2 connectivity for those users needing it. This need is diminishing over time."

But it's diminishing only slowly. Osako thinks it will be at least "a couple more years" before we see the last of the legacy ports. By way of example, he points specifically to PDAs and GPS devices (see "Langa Letter: A Real-Life GPS Road Test"), many of which are still coming to market with standard legacy serial ports for connecting to or syncing with PCs. Although third-party adaptors can let these devices connect to USB systems and vice versa (see "USB-To-Anything"), USB won't be able to completely replace legacy ports until peripheral vendors stop shipping legacy-based devices.

Slots, System Buses, And PnP
A PC's slots--the electrical connectors (and the buses they're part of) used to plug in add-on circuit boards or cards--are among the areas that have changed the most since the original PC.

Initially, a PC's slots were "dumb" devices, conceptually little different from a household wall socket, except with more conductors so the system could move 8 bits of data at a time (an "8-bit bus"). Early PCs had to be configured manually, usually by manipulating tiny hardware switches on both the motherboard and the add-on cards, and also via software settings. This wasn't too bad at first, but as PCs became more popular and more add-on devices became available, getting all the hardware to work together became something of a black art.

Over time, the PC's slots and buses gained more connectors, capable of moving 16 and eventually 32 bits at once, and they also became software configurable. By the mid-1990s--and largely at the behest of Intel and Microsoft--PC BIOSes learned to cooperate with the operating system and vice versa so that "plug-and-play" add-in cards could be detected and set up more or less automatically. Windows 95 was the first mass-market PC operating system to support plug and play (PnP), and from that point onward, PCs began to become self-configuring.

But PnP got off to a rocky start. (Wags referred to it as "plug and pray.") PnP could work quite well on PCI-based (Peripheral Component Interconnect) systems, but PCI was still a relatively new technology in 1995. Most of the installed base of PCs in the mid-1990s was either partially or entirely based on older bus standards such as ISA or EISA, and these standards worked less well, or not at all, with PnP.

Surprisingly, it's still possible to find systems today--even brand-new ones--with these older architectures. But fortunately, all the major mainstream vendors have now moved fully to PCI. Today, PnP works mostly as it should, and all modern operating systems support it. See, for example, Microsoft's PnP pages at hwdev/tech/PnP/default.asp or any of the various Linux PnP how-tos.

PCI technology hasn't stood still; it's evolved somewhat since its introduction. For example, in 1997, Intel introduced AGP--the Accelerated Graphics Port, basically a variant of PCI architecture designed specifically for high-speed graphics cards.

Other current PCI variants include Mini-PCI (used mostly in notebooks) and PCI-X (intended for high-bandwidth applications). See help/bus.htm for a good third-party overview of all major PC buses.

Although these standard forms of PCI are unlikely to vanish any time soon, the successor technology is already in development. It's called PCI Express, and it's another Intel initiative, albeit one with wide industry interest and support.

Beyond Conventional PCI
Intel calls PCI Express a third-generation technology. (ISA was the first generation, and PCI the second.) In fact, the original name for the new technology was 3GIO, for "Third Generation Input/Output."

3GIO--or now "PCI Express"--is a radical break with the system buses of the past, which gained power mainly through increasing parallelism--moving data 8, 16, 32, or more bits at a time. Although even more massive parallelism is possible, it's hard to manufacture and implement economically because of all the separate pins, connectors, wires, and traces, and because timing issues get harder and harder to manage as speeds climb.

PCI Express is designed to overcome these problems. It's actually a high-speed serial bus that only requires two wires or traces to complete a circuit and yet can run at extremely high speeds--up to 10 GHz, as opposed to the approximate 1-GHz practical limit for conventional parallel bus models.

But it's called PCI Express for a reason: Intel's spec defines what's mainly a hardware change that will result in simpler motherboard and peripheral designs, but that still will use the classic PCI driver model on the software side. This means that any operating system that works with PCI--and that's essentially all current operating systems--will also be able to work with PCI Express.

Interest in PCI Express is reaching critical mass, with a worldwide series of technical seminars scheduled for April. Intel predicts that initial PCI Express designs for commercial components will be ready for testing later this year.

When it's finally productized, PCI Express will probably first appear on high-end servers, then migrate downward. When that happens, and when it combines with the other advances mentioned elsewhere in this article, a whole new class of PC will emerge.

We'll come back to this in a moment, but there are several other legacy components to look at first.

Moving Hard Drive Data
If you look again at the photo of the original IBM AT we presented earlier, you'll note that the hard-drive cables and connectors look very familiar. Although the AT's hard-drive controller was on a separate plug-in card rather than being built onto the motherboard, the basic hard-drive bus technology was similar to what's still in use today: A wide, parallel-conductor ribbon cable carried the data between the drive and the controller. In fact, all of today's ATA drives are called that because they use this same basic legacy-based "AT Attachment" technology you can see in that 20-year-old photo.

The original AT drives had a theoretical maximum data-transfer rate of 4.2 Mbytes per second; today's top-of-the-line ATA-133 drives have a theoretical maximum of 133 Mbytes per second. But there's not much room left for growth: The classic ribbon-cable, parallel-conductor ATA bus has just about reached its practical maximum.

The next step in hard-drive bus evolution, and a break from the classic ATA legacy, is something called "serial ATA." In a way analogous to what's happening with PCI Express, serial ATA replaces complicated parallel circuitry with a much simpler--but faster--serial design using just four conductors instead of 80. It's a spec that should initially deliver a theoretical maximum of around 150 Mbytes per second and ramp to 600 Mbytes per second in the next five years or so.

The first fully productized serial ATA products hit the market early this year (for example, Seagate's Barracuda ATA V drive). But, as with any first versions, there are some problems. For instance, one authoritative review of early serial ATA drives and controllers shows that serial ATA technology isn't yet faster than top-of-the-line, high-speed, conventional ATA drives. However, even the very first mass-market serial ATA drive was notable for being quieter and cooler-running than its classic ATA counterparts, and there seems to be no doubt that drive and controller speeds will climb as new designs emerge and the technology matures.

Floppy Drives
Floppy drives have been a part of PCs from day one. The original IBM PC could optionally be equipped with two single-sided, low-density 5.25-inch floppy drives whose disks each held 160 Kbytes of data. Over the years, floppies became two-sided, their data density increased, the overall size shrank, and they ultimately became the hard-shelled (and thus no-longer-"floppy") 3.5-inch, 1.44-Mbyte media in common use today. But the floppy drive has nearly reached the end of its useful life.

With today's large drives, floppies are a joke as a backup medium; in fact, with today's large file sizes, it's not uncommon for a floppy to be unable to hold even a single complete file. Instead, USB-based key-chain- or pen-style "flash drives"--really just a memory chip with some USB input/output circuitry--have become enormously popular for the simple "sneaker-net" file transfers that used to be handled by floppies.

There are perhaps only two remaining serious uses for floppies. One is for booting to DOS or Linux for low-level work on a PC. There's still some utility in this, although most current PCs can boot from a specially formatted CD. A boot CD can do everything a boot floppy can (except accept new data) and can hold vastly more software for setting up an operating system, performing diagnostics, or otherwise working on a PC at a low level.

The other major remaining use for a floppy drive is to access old data and files archived on floppies. This sounds more important than it really is, because floppies are ephemeral--they self-demagnetize over time, and they physically degrade as their plasticizers and chemical binders decay. Floppy life is self-limiting: After a while, archived floppies become unreadable anyway, eliminating the need for drives to read them (see "Is Your Data Disappearing?").

As a result, all the major PC makers are beginning to offer floppy-free desktop units, often with a modest price rebate ($5 or so) as an incentive for buyers to accept the no-floppy option. It's slow going--only a fraction of buyers opt for the floppy-free versions of desktop PCs--but that will increase with time, just as it already has in the significantly floppy-free laptop market. It's mainly a matter of customers getting used to the idea of not having a floppy and getting more used to alternatives such as USB flash drives and boot CDs.

What's Here, What's Ahead
Clearly, the move to legacy-free design is well under way, and, piece by piece, step by step, we're moving toward the day when we'll see PCs using only fully modern, state-of-the-art high-speed components and architectures.

Some vendors are content to move with the market and let the evolution happen at its own pace. For example, Micron PC's Kelly Sasso said that legacy-free design is "really a nonissue for us at this point in time. It isn't an area that we are focused on."

But other vendors are already offering PCs today that employ the legacy-free technologies that have come to market. For example, consider the Compaq Evo D510 e-PC (below). It's a largely legacy-free, extremely compact (4 inches by 10 inches by 12 inches, 10 pounds) sealed box that's small and light enough to hang on a wall instead of occupying floor or desk space. Its primary connectivity is provided by no fewer than six USB ports. There's no floppy, no legacy parallel or serial ports, and no slots inside the box; indeed, there's no real reason to open the box at all.

A related model, the D510 Ultra-slim desktop, doesn't go quite as far and isn't quite as small (3 inches by 12 inches by 13 inches, 11 pounds); it has five USB ports and one PCI connector inside, occupied by a modem. But both systems show one likely direction in legacy-free design: extremely compact form factors with little or no need for access inside the box.

Compaq Evo

Another possible outcome of legacy-free design may be true modular PCs whose components simply plug together, Lego-style, to meet a variety of computing needs. This will become increasingly possible with the advent of high-speed serial buses such as PCI Express and serial ATA, which will allow separate components to plug together with remarkable simplicity.

Even now, using the slower bus technology of today, some vendors are experimenting with exotic, modular form factors. For example, one largely legacy-free hybrid modular design is the "Modubility" PC, whose central unit is a palm-sized, 8-ounce "information module" containing CPU, memory, hard drive, and operating system. This tiny central device uses wireless technology to connect to various "access modules" and display devices.

Indeed, as system designers are freed of the constraints of the past, we'll likely see radical PC designs that will not only be faster, smaller, and better than today's designs, but that will make the traditional beige-box PC seem positively antiquated. And I, for one, can't wait!

Fred Langa ([email protected]), the former editor in chief of Byte magazine, is an InformationWeek columnist.

Additional reading:

Legacy-Free/Legacy-Reduced Design hwdev/platform/pcdesign/LR/default.asp

Legacy-Free Hardware And BIOS Requirements hwdev/platform/PCdesign/LR/LfP.asp#BIOS

Legacy I/O Removal To Advance The PC Architecture hwdev/archive/newPC/legacyIO.asp

Bus Technologies Overview hwdev/bus/default.asp

ACPI/Power Management hwdev/tech/onnow/default.asp

Fast Boot/Fast Resume Design platform/performance/fastboot/default.asp

IBM PC And AT History history/history/decade_1980.html

General Search: "Legacy Free" search?q=legacy+free

Antique Computer Virtual Museum suprdave/classiccmp/ccidxa2z.htm
