Open Source Code Auditing By Design, Not Happenstance - InformationWeek


Commentary
8/28/2008 02:23 PM
Serdar Yegulalp


If there's any one thing you hear said consistently about open source, it's the security benefits. My take: given how much we depend on software, we need to stop assuming open source = secure, and take steps to make sure that happens. Here's one idea how.


Before I start, though, go read a piece by my colleague Robert Hansen, "Atomic Paradigms for Enterprise Security" over at Internet Evolution. It's well worth your time, but the gist of the piece is simple. With security (Rob sez), we're being forced to become either Oppenheimers (disclose security issues slowly and responsibly) or Tellers (disclose them early and often), with no middle ground.

That piece echoes another sentiment I've heard -- I forget who said it, unfortunately: Technology is neither good nor evil, but it is not neutral, either. It assumes the shape of the moral container it is poured into, so to speak. What goes for security practices in general most definitely goes for open source as well. If anything, it's twice as relevant. A fine example, courtesy of InformationWeek Reports, is how open source security tools can be a two-edged sword -- since the exact same tools are available to an attacker as well as a defender.

I've written before that open source is not a guarantee of security, but certainly an enabler. There need to be people in place, people who understand security from the inside out as a process, to audit your code for security issues. This part we all know. (I hope.)

What many forget, though, is that because open source is meant to be re-used, derived code turns up in a panoply of places -- and there's plenty of opportunity for something to be derived from a source that bears little resemblance to its current use. Something written for a desktop environment might find pieces of its code recycled into a kiosk setting ... and that "innocent" unchecked array boundary may turn into a hole big enough to drive an eighteen-wheeler through.
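A minimal sketch of that failure mode, with every name invented for illustration: a copy routine written for a fixed-width desktop form field gets reused against unbounded kiosk input. In Python the unchecked write merely raises an IndexError; in the C code this scenario usually describes, the same oversight means silent memory corruption an attacker can aim.

```python
# Hypothetical sketch: a routine originally written for a desktop app,
# where input came from a 64-character form field, so nobody bothered
# to check bounds. All names here are invented for illustration.
FIELD_WIDTH = 64

def copy_to_record(record: bytearray, user_input: bytes) -> None:
    # No boundary check: harmless behind the original form field,
    # dangerous once the same code is recycled behind a kiosk's
    # raw input stream.
    for i, byte in enumerate(user_input):
        record[i] = byte  # past len(record): IndexError here; in C,
                          # an out-of-bounds write instead

record = bytearray(FIELD_WIDTH)
copy_to_record(record, b"alice")  # fine in the original setting
```

The audit question, in other words, is not whether the code was correct where it was written, but whether its implicit assumptions survive the trip to its new home.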

Not good. So, given the way open source is produced, how would we go about using a piece of code with confidence that it's been vetted -- beyond the mere fact that it's open and anyone can look at it? How do we continue to use open source without simply crossing our fingers and hoping for the best?

One idea I have is an organization of security experts who are specifically devoted to auditing open source projects. For a yearly fee, you could enroll as a client and submit code to folks willing to pick it apart line by line -- a kind of for-pay peer-review security process. To make it a little more manageable for the little guys, fees could be prorated based on the number of lines of code you submit per year, with maybe some additional prorating for interpreted vs. compiled languages.
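To make the proration concrete -- with every number invented purely for illustration, since the article proposes no actual rates -- the fee schedule could be as simple as a base rate per thousand lines, with an adjustment for compiled versus interpreted code:

```python
# Hypothetical fee schedule. All figures are invented for illustration;
# the article only proposes prorating by lines submitted per year,
# with some further adjustment for interpreted vs. compiled languages.
BASE_RATE_PER_KLOC = 40.0    # assumed dollars per 1,000 lines per year
COMPILED_MULTIPLIER = 1.5    # assumed surcharge for compiled code

def yearly_fee(lines_submitted: int, compiled: bool) -> float:
    fee = (lines_submitted / 1000) * BASE_RATE_PER_KLOC
    if compiled:
        fee *= COMPILED_MULTIPLIER
    return round(fee, 2)
```

Under these made-up numbers, a small project submitting 10,000 lines of interpreted code would pay 400.00 a year; the same volume of compiled code, 600.00.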

The audited code could be given a "Good Codekeeping Seal of Approval" -- perhaps in a series of gradations to indicate how much confidence the auditing organization has in the code. Said auditing information could be published along with the software license, although any changes or derivative work would have to be re-certified. (You could, for instance, match the granted certification to an SHA-1 fingerprint for a given iteration of the source.)

I don't doubt for a second this concept would bring up a whole sheaf of new problems even as it solved existing ones. For one, what kinds of standards would such an organization have for screening prospective experts? And would we get the same kinds of clashes over auditing that we get now over different licensing terms?

One thing I am sure of: we're long past the stage where just having open source alone is enough of a guarantee of integrity. Especially if we're all turning into either Oppenheimer or Teller whether we want to or not.
