Will updates to the Internet's most trusted security barrier get the support they need before we have another breach à la DigiNotar?

Larry Seltzer, Contributor

September 23, 2011


One of the most important technologies on which secure computing is based is the Secure Sockets Layer (SSL). Most of you know it as the thing that puts a lock icon in your Web browser and maybe turns the address bar green, but it's far more widespread than that. Both your intranet and public Internet servers probably use SSL extensively; on the Web it's the "S" in HTTPS. If your users access Exchange remotely, they are likely using SSL for that. Many of your servers, such as your Web and database servers, talk SSL to each other. On the whole it has been an effective standard so far, but its outlook is getting murky.

On Friday, a researcher at a conference in Buenos Aires demonstrated an attack that compromises the confidentiality of SSL communications at the browser. In late August, a significant certificate authority, DigiNotar of the Netherlands, was compromised and manipulated into issuing fraudulent certificates for Google and other large sites. All this is happening at a time when many security researchers are already dismissive of the trustworthiness of the whole certificate authority (CA) business, what we call the public key infrastructure (PKI).

So how screwed are we?


[Chart] More than half of all SSL/TLS Internet servers support the insecure SSL v2.0 protocol; only a handful support the secure TLS 1.1 and 1.2 protocols. (Data credit: Ivan Ristic of Qualys.)
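
If you're curious where your own servers fall on that chart, you can probe them yourself. Here's a minimal sketch in Python (assuming Python 3.7 or later; note that a modern ssl build may refuse to even offer the oldest protocol versions, and SSL 2.0/3.0 can't be probed this way at all, so results depend partly on your local library):

    import socket, ssl

    def supports(host, version, port=443):
        """Return the negotiated protocol string if the server accepts
        exactly this TLS version, or None if the handshake fails."""
        ctx = ssl.create_default_context()
        ctx.minimum_version = version
        ctx.maximum_version = version
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.version()  # e.g. "TLSv1.2"
        except (ssl.SSLError, OSError):
            return None

    for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
              ssl.TLSVersion.TLSv1_2):
        print(v.name, "->", supports("example.com", v))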

If you're an Internet criminal, there is probably no security barrier you would like to break through more than SSL. It guards virtually all commercial Internet transactions, important server-to-server communications, and remote access to enterprises. We've all just taken its trustworthiness for granted.

The Buenos Aires vulnerability is a great example of how lazy the community has been about strengthening SSL. The flaw is a very old one, the "known initialization vector problem," previously thought to be unexploitable for practical reasons. In the attack scenario, the attacker acts as a man in the middle, intercepting, and perhaps even modifying, communications between the parties. That could be you and your bank, for example. Improvements to SSL that would block this attack are many years old, but they are essentially unused.
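
To make the known-IV problem concrete, here's a toy demonstration (it uses the third-party cryptography package; the key, the secret, and the helper names are my illustration, not TLS internals). In SSL 3.0 and TLS 1.0, the IV for each record is the last ciphertext block of the previous record, so an attacker who can inject chosen plaintext into the connection can confirm a guess about an earlier encrypted block:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)

    def cbc_encrypt(iv, plaintext):
        # AES-CBC with a caller-supplied IV; plaintext must be a multiple of 16
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # The victim encrypts a secret block; the attacker sees iv0 and the ciphertext.
    iv0 = os.urandom(16)
    secret = b"password16bytes!"
    c_secret = cbc_encrypt(iv0, secret)

    # SSL 3.0 / TLS 1.0 chaining: the IV of the NEXT record is the last
    # ciphertext block of the previous one, so the attacker knows it in advance.
    iv_next = c_secret[-16:]

    # To test a guess G at the secret, inject G XOR iv_next XOR iv0. CBC then
    # computes E(G XOR iv0), which matches c_secret exactly when G == secret.
    guess = b"password16bytes!"
    probe = xor(xor(guess, iv_next), iv0)
    c_probe = cbc_encrypt(iv_next, probe)
    print("guess confirmed:", c_probe[:16] == c_secret[:16])  # True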

A little background is necessary: The SSL standard, then at version 3.0, was succeeded in 1999 by Transport Layer Security (TLS) 1.0, a very similar standard. TLS versions 1.1 and 1.2 followed in 2006 and 2008, respectively. But SSL/TLS software on the Internet overwhelmingly supports SSL 3.0 and TLS 1.0. Support for TLS 1.1 and 1.2 is almost nonexistent. Even worse, many existing deployments still rely on the horribly insecure SSL 2.0.

RFC 4346 (TLS 1.1) fixes the bug exploited on Friday. The standard states: "The implicit Initialization Vector (IV) is replaced with an explicit IV to protect against CBC attacks."
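
Here's what that one sentence buys you, sketched in the same toy style as above (again with the cryptography package; no MAC or padding is shown, so this is not the real TLS record layer):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)

    def encrypt_record_tls11(plaintext):
        iv = os.urandom(16)  # fresh and unpredictable for every record
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        # The IV travels in the clear at the front of the record; secrecy of
        # the IV was never the point, unpredictability is.
        return iv + enc.update(plaintext) + enc.finalize()

Because the attacker can no longer predict the IV of the next record, the guess-confirmation trick above falls apart.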

Support for these standards would have to be built into both sides of the conversation, i.e., browsers and servers. It turns out Microsoft does support TLS 1.1 and 1.2 in Internet Explorer, but disables them by default. Firefox, Chrome, and probably other browsers don't even offer them. Why is this? Microsoft's Eric Lawrence explained the decision in a blog post several months ago: it's common to find buggy servers that can't handle TLS 1.1/1.2 handshakes and return errors even when the client is doing everything right. And why don't other products even offer support? Most of them are built on crypto libraries, OpenSSL chief among them, that don't yet support TLS 1.1 or 1.2.
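
The practical consequence for browser vendors is a fallback dance: offer the newest version, and if a version-intolerant server chokes on the handshake, retry with a lower ceiling. A rough sketch of that logic in Python (the function name and retry policy are my illustration, not any browser's actual code):

    import socket, ssl

    def connect_with_fallback(host, port=443):
        # Try the newest version first; if a version-intolerant server kills
        # the handshake, retry with a lower ceiling.
        for ceiling in (ssl.TLSVersion.TLSv1_2,
                        ssl.TLSVersion.TLSv1_1,
                        ssl.TLSVersion.TLSv1):
            ctx = ssl.create_default_context()
            ctx.maximum_version = ceiling
            sock = socket.create_connection((host, port), timeout=5)
            try:
                return ctx.wrap_socket(sock, server_hostname=host)
            except ssl.SSLError:
                sock.close()
                continue
        raise ConnectionError("no mutually acceptable TLS version")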

So even though the standard has been fixed, that fix is basically unavailable. The answer will probably be, as Google is doing with Chrome, to implement workarounds in the browser that defeat the attack. Browser vendors have to do this, but of course it's the wrong way to fix the problem. The right way, widespread support for TLS 1.1 and 1.2, doesn't look like it's happening any time soon.
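
For the record, the workaround Google was reported to be testing is "1/n-1 record splitting": send the first byte of each write as its own TLS record, so the unpredictable MAC bytes in that tiny record re-randomize the chaining before the attacker-influenced bytes go out. Conceptually:

    def split_records(data: bytes):
        # Send the first byte alone, then the remaining n-1 bytes; each chunk
        # becomes its own TLS record. The MAC mixed into that one-byte record
        # makes the next record's IV unpredictable again.
        if len(data) <= 1:
            return [data]
        return [data[:1], data[1:]]

    print(split_records(b"GET / HTTP/1.1\r\n"))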

The problems with certificate authorities, like the breach that hit DigiNotar and ultimately proved a death sentence for the company, also have no clear solutions. It's likely that many of the biggest CAs are more responsible and better defended against such attacks, but you can't prove it, nor can you prove that they haven't already been successfully attacked. We might just not know it yet.

Famed researcher Moxie Marlinspike and others have proposed an alternative PKI called Convergence that does not rely on trusted certificate authorities. It's too soon to tell if Convergence will work, either as a technical matter or as a business matter.
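
The core notary idea is simple enough to sketch: compare the certificate you received against what several independent vantage points see, and require agreement rather than a CA signature. Everything below, from the majority policy to the hardcoded notary views, is my illustration rather than the actual Convergence protocol:

    import hashlib, ssl

    def observed_fingerprint(host, port=443):
        # What certificate does this vantage point see for the host?
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def notary_check(host, notary_views):
        # Trust the certificate only if a majority of notaries saw the same one.
        mine = observed_fingerprint(host)
        agreeing = sum(1 for fp in notary_views if fp == mine)
        return agreeing > len(notary_views) // 2

    # Stand-in notary observations; a real client would query notary servers.
    views = [observed_fingerprint("example.com")] * 3
    print(notary_check("example.com", views))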

For the foreseeable future, we're stuck with SSL, and that's reason to worry. A couple of things to consider: Not all certificate authorities are created equal. It might be that you get what you pay for, so don't necessarily go for the lowest price. And remember defense in depth: don't rely exclusively on SSL to protect you. Try to employ additional protections where available, and always be on the lookout for suspicious happenings.

For some analysis of the Friday announcements, including links to the research paper and proof-of-concept attack code and some mitigation techniques, see this entry at the Internet Storm Center.
