If you're like most of us, you recently lost time and productivity weeding out dozens, hundreds, or even thousands of worm-generated E-mails; or filtering and dumping the bogus "we detected a virus in your E-mail" auto-reply messages that the worm triggered as a secondary effect (see "The Danger In Auto-Reply Messages").
InformationWeek's Editor in Chief Bob Evans was affected, too, and that led him to devote several articles to hacking and security, including a column called "Secure Computing Must Move To The Front." If you haven't yet seen that item, please check it out now; it's well worth your time, and it will help put the rest of this column in context.
You probably share Bob's anger and agree when he writes that acts of cybervandalism "...are illegal, they are dangerous, they are costly and cowardly, and they must be treated as such, which means that the agents behind these acts need to be rooted out, prosecuted to the fullest extent of the law, and punished."
But the cybercriminals aren't the whole story, and Bob goes on to address software publishers: "It is time for ... vendors to accept their own responsibility to toss out flawed development strategies, to stop viewing patches as upgrades, to cease with the evasive language that attempts to ascribe blame everywhere but on themselves. It is time for Microsoft in particular to step up to its promise of 'trustworthy computing' so boldly proclaimed many months ago by Bill Gates himself..."
Indeed, Microsoft is at the heart of our current online security woes; there are real and systemic problems with Microsoft's software development process. For example, consider buffer overruns, which can be exploited to stuff hostile code into a PC. It's easy for buffer-overrun vulnerabilities to happen--they're one of the most common types of programming error. But buffer-overrun problems have affected Microsoft software time and again across the years and across multiple Microsoft product lines.
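To make the problem concrete, here's a hypothetical sketch (not code from any Microsoft product) of the classic C pattern behind most buffer overruns, alongside the bounds-checked alternative:

```c
#include <string.h>

/* Hypothetical sketch -- not code from any real product.  The classic
 * buffer-overrun pattern: copying externally supplied data into a
 * fixed-size buffer with no length check. */
void unsafe_copy(char *dest, const char *src) {
    strcpy(dest, src);  /* writes past the end of dest if src is too long */
}

/* The bounds-checked alternative: never write more than the
 * destination's capacity, and always terminate the string. */
void safe_copy(char *dest, size_t dest_size, const char *src) {
    strncpy(dest, src, dest_size - 1);
    dest[dest_size - 1] = '\0';  /* strncpy may leave dest unterminated */
}
```

Calling `safe_copy` on an 8-byte buffer with a longer input simply truncates the string; calling `unsafe_copy` in the same situation silently overwrites whatever memory follows the buffer, which is exactly the opening an attacker needs.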
If these buffer-overrun issues were isolated cases, that would be one thing. But the sheer number and persistence of such problems in Microsoft software suggests to me that there's a fundamental blind spot in Microsoft's corporate programming practices, and a glaring hole in their quality-assurance strategies.
Obviously, these buffer problems can be found and patched after the fact--buffer-overrun patches make up a huge percentage of Windows Update items. Why can't they be found and patched beforehand? How hard would it be for the world's largest desktop-software company to establish internal programming standards to ensure that all input buffers in all Microsoft products are protected by security routines? Or to check that whatever data is entering the software is of the type, length, and format that the software expects and can handle?
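The kind of check described above doesn't require exotic technology. As a minimal sketch, with a made-up "numeric ID" format and a hypothetical function name, verifying that input matches the expected type, length, and format can be as simple as:

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical sketch of the input check described in the column:
 * verify that incoming data has the expected type, length, and format
 * before the software acts on it.  The (made-up) rule here is
 * "a numeric ID of 1 to 6 digits".  Returns 1 if valid, 0 otherwise. */
int valid_numeric_id(const char *input) {
    size_t len = strlen(input);
    if (len == 0 || len > 6)                    /* length check */
        return 0;
    for (size_t i = 0; i < len; i++)
        if (!isdigit((unsigned char)input[i]))  /* type/format check */
            return 0;
    return 1;
}
```

A routine like this rejects anything malformed before it ever reaches a buffer; the point is that such validation is cheap to write and easy to mandate as a coding standard.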
Microsoft might answer, "It's very hard," but many security issues in Microsoft software are discovered after the fact by small companies and one-person shops. Somehow, these small companies and individuals manage to do what Microsoft itself cannot--or rather, will not--do. Surely, a flaw that can be discovered by a lone programmer working in his basement ought to be discoverable by the world's largest desktop-software company.
So clearly, there are very real problems with the way Microsoft builds and tests software, and no amount of white papers or PR spin or windy speeches will change that. What it will take is for Microsoft to be far more aggressive in reviewing existing code, and far more rigorous in testing new code. I don't know whether Microsoft is up to the challenge. They could and should be; they surely have the resources. But a long--and I mean long!--history of the same programming errors showing up again and again in Microsoft software suggests that something in Microsoft's corporate structure is preventing positive change.
But Wait, There's More
Microsoft's shortcomings are real, but are only part of the problem in desktop security. There also are factors involving human nature and market forces--which is to say, involving you and me--and all these factors have to be considered as part of the solution.
For example, consider the simplistic argument "Dump Microsoft--switch to [name of favorite alternate OS here]." Today, Microsoft software is ubiquitous. It's a fat, easy target for crackers and other miscreants, especially those who seek public notoriety or the acclaim of their fellow crackers: By targeting the software with the largest market share, malicious coders are guaranteed a huge pool of potential victims, thus amplifying the effect of whatever harm they can do. If the market were different--say, if Linux were top dog--then it would receive far more hostile attention than it does today, and Linux's weaknesses would be in the limelight. (All software contains at least some flaws and coding errors; see "Linux Has Bugs: Get Over It".) Switching vendors in and of itself won't eliminate security problems because malicious hackers will simply target the new top dog.
A related issue is the "newbie factor." Because marketshare-leading Microsoft software comes bundled with most new PCs, there's a higher percentage of newbies using Microsoft's products than any other vendor's. This helps malicious coders because newbies can be relied upon to do the wrong thing. For example, the recent Blaster worm infected tens of millions of PCs, but it did so only because those PCs were running without even the most basic security measures: The operating systems weren't properly patched, no decent desktop firewall was in place, and no good antivirus tool was running. Any one of those three precautions would have stopped the Blaster worm in its tracks, but clearly, huge numbers of users still run their PCs wide open and unprotected.
Newbies will err, no matter what operating system they use, and any long-term solution to improving desktop security has to allow for the "newbie factor." This isn't a Microsoft problem per se. In fact, I think it's safe to say that a mass migration to Linux would make things worse, at least for a while: Linux has many strengths, but newbie-friendliness isn't one of them.
To solve the newbie problem, an operating system has to be safe enough out of the box to foil at least the most basic kinds of attacks, but still has to be easy enough so unskilled users can connect to a LAN or the Internet without undue trouble. That's a tough balancing act, but several vendors are getting close. For example, Red Hat Linux offers very simple auto-configuration of its firewall, and Microsoft includes a simple click-to-activate firewall in XP.
But that points out another problem affecting security: How do you get people to move to new software? For example, Microsoft has twice tried to kill off Win98--a five-year-old operating system that itself was mostly a refinement of the eight-year-old Win95. But customers howled: "We want our old software!" As a result, Microsoft has twice extended the life of Win98; active support now will continue until January 2004, and Microsoft won't completely pull the plug on Win98 until January 2005 (see "Microsoft's Adjusted 'Product Lifecycle' Plans").
When Microsoft finally retires Win98, the core of that operating system will be 10 years old. Think of what the computing world was like then: Computers were nowhere near as common as they are today, and most computer users had never surfed the Web or navigated the Internet. What worms and viruses existed then mostly traveled hand-to-hand, by floppy disk!
Microsoft's corporate blind spot about things like buffer overruns may be inexcusable, but I also think it's unreasonable to expect any decade-old software to deal with threats that mostly didn't exist at the time the software first appeared. A 10-year-old copy of Linux also won't look very good compared to today's versions, for example; a 10-year-old Mac will likewise look pretty lame. No operating system from eight or 10 years ago is really up to all the challenges of today's needs.
A Two-Part Solution
To me, it appears that the problem of security on the desktop will require two simultaneous changes. First, all software vendors--but most especially Microsoft--have to heed Bob Evans' call (see "Secure Computing Must Move To The Front") and step up to the challenge of producing code that actually delivers a high level of intrinsic security.
But second, we have to do our part. In the short term, that means deploying a desktop firewall and an antivirus tool on every PC, and keeping all PCs up to date with existing security patches--no excuses, no griping about cost, no finger-pointing. It simply has to be done. Yes, the costs are real, but so are the payoffs: These steps, by themselves, yield acceptable levels of security even with current software products, and totally prevent problems like the Blaster worm.
Looking further ahead, we all have to be open to change. We must be willing to abandon older software so we're not dragging along decade-old problems and inadequacies into new generations of software. And we need to vote with our dollars and reward vendors who deliver--and not just talk about--secure software.
But what's your take? Is responsibility for security shared between vendors and end users, as I suggest, or is it mainly a vendor problem? If someone said, "I can give you virtually hacker-proof software, but it will require that you toss all your current software," would you do it? Would your company? Do you prefer an incremental approach to improving security, even if that takes longer? What steps do you currently take to protect your own PC and the PCs you're responsible for? Join in the discussion!