Last week's <a href="http://www.informationweek.com/news/windows/operatingsystems/showArticle.jhtml?articleID=224600157">McAfee meltdown</a> showed that there is danger in automatic security signature updates. Yet as much as it would be easy to just blame McAfee, what this incident really shows is the insanity of the current security process.

Dave Methvin, Contributor

April 25, 2010

Initially, there's what I call a "malware detection honeymoon period" for a lot of users, when they have complete faith in what their security software tells them. Inevitably the software starts popping up warning dialogs, and they dutifully follow its suggestions to stop the "dangerous activity" the scanner reports. Just last week I got a report from a user who was incensed that our company was trying to send them a virus; they knew we were guilty because their off-brand virus scanner said so. I checked the file with twenty-two different virus scanners and found it squeaky clean, but no -- they had faith in their scanner, so something must be wrong with that file, and with our company.
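If you're wondering how to run that kind of multi-engine check yourself, the basic recipe is to feed the file to each scanner's command-line interface and compare the verdicts. Here's a minimal Python sketch of the idea; the scanner commands and the exit-code convention are hypothetical stand-ins, since every vendor's CLI is different:

```python
import subprocess

# Hypothetical command lines for locally installed scanners; real vendors'
# CLI names, flags, and exit codes all differ, so adjust for your tools.
SCANNERS = {
    "scanner_a": ["scana", "--scan"],
    "scanner_b": ["scanb", "/check"],
}

def scan_file(path):
    """Run each configured scanner against one file and collect verdicts.

    Convention assumed here: exit code 0 means the scanner considers the
    file clean; any other exit code means it flagged the file.
    """
    verdicts = {}
    for name, cmd in SCANNERS.items():
        try:
            result = subprocess.run(cmd + [path], capture_output=True)
            verdicts[name] = "clean" if result.returncode == 0 else "flagged"
        except FileNotFoundError:
            # Scanner isn't installed on this machine; record that instead
            # of crashing so the remaining engines still get their say.
            verdicts[name] = "unavailable"
    return verdicts

if __name__ == "__main__":
    for scanner, verdict in scan_file("suspect-file.exe").items():
        print(f"{scanner}: {verdict}")
```

The point of polling many engines is exactly the lesson from that user's complaint: no single scanner's verdict, clean or flagged, deserves blind faith.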

Over time, though, users realize that their scanner isn't always right. When they always take its advice, applications won't install or can't communicate with the Internet. After too many instances of the security software crying "wolf", users turn skeptical, which makes them more likely to distrust and override its advice. At that point, it's a crapshoot whether any security software that gives the user veto power can offer effective protection.

Add to those sins the problem that bit McAfee this time. An erroneous detection in a signature update had McAfee's scanner mistaking the innocent svchost.exe system file for the W32/Wecorl.a virus. McAfee removing the virus, er, critical Windows file, rendered affected PCs unbootable -- a problem that can't be fixed without a face-to-face encounter with each machine. McAfee realized the mistake pretty quickly, but only after the bad update was already in the field.

This most recent episode does bring into question whether McAfee's quality control and testing are good enough, and their answer darned well should be "no". Yet this is hardly the first time a bad signature update has caused this kind of damage. False positives don't have to be very common to be catastrophic -- to the PC, to the IT department, and to the product's credibility. Is the industry's current approach really sustainable? Malware has managed to stay ahead of security software for more than a decade; the good guys are constantly reacting to threats, leaving open a window of vulnerability that is hours or even days long.

Given the flaws of the current system, is there any alternative? A few options do exist. For example, instead of depending on security scanners to find the malware needle in the software haystack using a blacklist, products like Bit9 use a whitelist and only allow approved programs to run. In some environments this can be a much better approach. In a setting where users need to run arbitrary software on their PCs, such as a software development shop, it's not practical. Yet many offices really can make a short list of the software they want their users to run. Is yours one of them?
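To make the whitelist idea concrete, here's a minimal Python sketch of hash-based application whitelisting. The approved-hash set is a made-up placeholder, and a real product like Bit9 enforces the policy at the operating-system level rather than in a script, but the core decision really is this simple:

```python
import hashlib

# Placeholder whitelist: SHA-256 hashes of the approved binaries.
# A real deployment would populate this from a centrally managed,
# tamper-protected list rather than hard-coding it.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path, chunk_size=65536):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_run(path):
    """Allow a program only if its hash is on the approved list.

    Anything unknown -- good, bad, or brand-new malware -- is denied
    by default, which is the whole point of whitelisting.
    """
    return sha256_of(path) in APPROVED_HASHES
```

The tradeoff is visible right in may_run(): a brand-new legitimate program is blocked until someone adds its hash, which is exactly why the approach suits a locked-down office better than a developer's desktop.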
