'Triple handshake' bug another big problem for TLS/SSL

Summary: Apple recently patched a vulnerability in SSL/TLS code in iOS and OS X. No, not Heartbleed, but one which is, in some ways, worse.

TOPICS: Security

You could miss it if you weren't paying close attention through all the Heartbleed blather, but last week Apple patched a severe problem in their TLS/SSL code in iOS and OS X. An attacker in a privileged position, i.e., between two parties engaged in SSL/TLS (henceforth just "TLS"), could intercept and decode communications or inject commands and data.

The bloody triple handshake logo. Credit: @Raed667.

The bad news is that this isn't just a bug in Apple's code; it's a bug in the TLS protocol itself, a protocol which appears to be quite a mess.

Matthew Green, a cryptographer and professor at Johns Hopkins, says that the triple handshake changes our concept of what security means in the context of TLS. (He has also decided to learn one lesson from Heartbleed: when it comes to bugs, branding is key, so he calls it "3Shake" and has commissioned the nearby logo.)

Apple credits the report of the bug, now CVE-2014-1295, to Antoine Delignat-Lavaud, Karthikeyan Bhargavan, and Alfredo Pironti of Prosecco at Inria Paris, and describes it in this way:

    In a 'triple handshake' attack, it was possible for an attacker to establish two connections which had the same encryption keys and handshake, insert the attacker's data in one connection, and renegotiate so that the connections may be forwarded to each other. To prevent attacks based on this scenario, Secure Transport was changed so that, by default, a renegotiation must present the same server certificate as was presented in the original connection.
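
Apple's described fix can be sketched as pinning the server certificate across renegotiations. This is a toy illustration under stated assumptions, not Secure Transport's actual code; `TLSConnection` and `on_server_certificate` are hypothetical names:

```python
import hashlib

class TLSConnection:
    """Toy connection state; names are illustrative, not Secure Transport's API."""

    def __init__(self):
        self.pinned_cert_fingerprint = None

    def on_server_certificate(self, cert_der: bytes) -> None:
        """Called whenever a handshake (initial or renegotiated) presents a certificate."""
        fingerprint = hashlib.sha256(cert_der).hexdigest()
        if self.pinned_cert_fingerprint is None:
            # Initial handshake: remember the server's certificate.
            self.pinned_cert_fingerprint = fingerprint
        elif fingerprint != self.pinned_cert_fingerprint:
            # Renegotiation presented a different certificate: the 3Shake red flag.
            raise ConnectionError("renegotiation presented a different server certificate")
```

The check is deliberately dumb: it doesn't re-validate the chain, it just refuses to let the peer's identity change mid-connection, which is the scenario the triple handshake depends on.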

Delignat-Lavaud, Bhargavan and Pironti found this a while ago and, in the best traditions of research, have been disclosing it confidentially to those responsible for affected software. Read far more gritty detail on the attack, variants of it, and the progress of remediation at their web site.

The image below demonstrates how the attack works. Don't try too hard to understand it. Professor Green describes it as "absolutely insane."

Triple handshake attack. The attacker mediates two handshakes that produce the same master secret (MS) on both sides, but two different handshake hashes. The resumption handshake leaves the same MS and an identical handshake hash on both sides. This means that the Finished message from the resumption handshake will be the same for the connections on either side of the attacker. Now he can hook up the two without anyone noticing that he previously injected traffic. Image credit: Matthew Green.
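
The caption's point about identical Finished messages can be made concrete. In TLS 1.2 the Finished verify_data is a PRF of the master secret over a label and the hash of the handshake transcript, so once both sides of the attacker share the same MS and the same handshake hash, the values match byte for byte. A simplified sketch of that construction (toy inputs, not a full TLS implementation):

```python
import hashlib
import hmac

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    """TLS 1.2 P_SHA256 expansion (RFC 5246 style), simplified."""
    out = b""
    a = seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def finished_verify_data(master_secret: bytes, label: bytes, handshake_hash: bytes) -> bytes:
    """Finished message payload: PRF(MS, label + handshake_hash), truncated to 12 bytes."""
    return p_sha256(master_secret, label + handshake_hash, 12)

# Before resumption: same MS, but different transcripts on each side of
# the attacker, so the Finished values differ and splicing would be caught.
# After resumption: same MS *and* identical handshake hash, so both sides
# compute the identical Finished value, and the splice goes unnoticed.
```

The takeaway is that Finished only binds what goes into the handshake hash; 3Shake arranges for that hash to be identical on both legs.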

I'm not going to try to elaborate on the explanation, as I don't think there's any simple and honest way to explain the mechanism. It's a man-in-the-middle attack with full takeover control; that's what you need to know. And it affects many TLS implementations. Yes, it requires that the attacker be in a privileged position, but effectively Heartbleed does too, because you need to capture the traffic somehow in order to decode it.

Which implementations are affected and which have been remediated? Delignat-Lavaud, Bhargavan and Pironti maintain a partial list in the "Disclosure and Vendor Response" section of their web site. Not all TLS implementations and applications are affected, though a large number are. Of course we know that Apple's was affected and has now been fixed. Some notable status reports:

  • SChannel (Internet Explorer): notified October 18, 2013. Security update under test
  • OpenSSL, GnuTLS: notified on October 20, 2013. Not directly affected, but applications using them usually are. Mitigations pending adoption of new TLS extension
  • NSS (Chromium, Firefox): notified November 4, 2013. Prevented degenerate Diffie-Hellman public keys in CVE-2014-1491. Firefox correctly checks server certificates during renegotiation. (So it's fixed.)
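
The NSS fix mentioned above (CVE-2014-1491) amounts to rejecting degenerate Diffie-Hellman public values, which would otherwise let an attacker force both connections onto a small, predictable set of shared secrets. A minimal sketch of such a check (full validation would also verify subgroup membership):

```python
def is_valid_dh_public_key(y: int, p: int) -> bool:
    """Reject degenerate Diffie-Hellman public values modulo prime p.

    y in {0, 1, p-1} forces the shared secret into a tiny predictable set,
    letting a man-in-the-middle make both sides derive the same key.
    """
    return 1 < y < p - 1  # excludes 0, 1, and p-1
```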


As Green describes, the triple handshake is an abuse of features (he uses the term "band-aid") which were put into TLS in order to fix previous man-in-the-middle attacks. TLS has several methods of "handshake," the ritual communication performed between parties in order to establish a secure connection. 3Shake requires that the parties use certain of these methods; it's not clear to me how often they are used, but the features wouldn't be in there if someone didn't want them.
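
Of those handshake modes, session resumption is central to 3Shake: the abbreviated handshake reuses a cached master secret without presenting certificates again. A toy sketch, with illustrative names (real stacks index far richer session state):

```python
# Toy session cache illustrating abbreviated-handshake (resumption) mode.
session_cache = {}  # session_id -> master_secret

def full_handshake(session_id: bytes, master_secret: bytes) -> bytes:
    """Full handshake: certificates shown, key exchange run, result cached."""
    session_cache[session_id] = master_secret
    return master_secret

def abbreviated_handshake(session_id: bytes) -> bytes:
    """Resumption: no certificates, no key exchange; the cached secret is
    reused as-is. Pre-fix, nothing here re-checks which server the secret
    was originally established with, which is part of what 3Shake exploits."""
    return session_cache[session_id]
```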

There are several solutions proposed by the researchers who reported the triple handshake to Apple. The temptation will be to do what Apple did and add another check to the handshake ("...a renegotiation must present the same server certificate as was presented in the original connection").

Delignat-Lavaud, Bhargavan and Pironti's diagram of the Triple handshake. Does that clear things up?

But Green argues that the problem is the mess the protocol has become. Recall that TLS began life as a hack built by Netscape for their browser in the days when version 0.1 was good enough to ship. Over the years features and fixes have been added mostly because someone demanded them, not because any clear review was given to what we need TLS to do. The protocol came first, then the analysis of it.

Unfortunately, it's a bit like SMTP, whose problems we are doomed to suffer for all eternity. Too many people rely on the messy TLS we have right now to change it on the fly. Make a new fixed-up and clean TLS 2.0 and nobody will use it. Every now and then a transition is made from a badly insecure product to a more secure one (the transition of old Microsoft Office file formats to the new ones comes to mind), but such transitions are painful, and without a very powerful party to force things, as Microsoft could with its Office code, I bet the transition just won't happen.

Time for the IETF TLS Working Group to stock up on band-aids.


Comments
  • FOSS just keeps scoring

    "Many (blind) eyes approach". LOL. Biggest FOSS joke for a while. Thou can only fool people for so long.
    • Swing and a miss. You clearly don't understand the issue

      Microsoft's SChannel has it as well, as did Apple's proprietary, non-FOSS implementation.
  • Which is why proprietary software has been failing...

    and being replaced for critical infrastructure.

    You are blaming the wrong thing.

    SSL/TLS is an IETF standard. If you want to blame anyone, blame them if the standard is wrong.

    Now if an implementation is wrong, show how to fix it...
  • sorry, disagree about the fix

    "Make a new fixed-up and clean TLS 2.0 and nobody will use it. "

    Wrong. Look at SSL and TLS now. Or encryption algorithms. SSL 2.0. SSL 3.0. TLS 1.0, 1.1, 1.2. All coexist today. Endpoints negotiate which to use from a list that is in preference order and pick the best common choice. Build a TLS 2.0 (or whatever) and add it to the list, and as more endpoints implement it, it will automatically get used. Same as all the other versions floating around.
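
The negotiation the commenter describes can be sketched roughly like this (illustrative only; real TLS encodes versions numerically and the server picks from the client's offered list):

```python
def negotiate_version(client_versions: list, server_versions: list) -> str:
    """Pick the most-preferred protocol version both endpoints support.
    Both lists are assumed to be in preference order, best first."""
    for version in client_versions:
        if version in server_versions:
            return version
    raise ValueError("no common protocol version")
```

Under this model a hypothetical "TLS 2.0" added to the front of both lists would win automatically, exactly as the commenter suggests, while older peers would keep falling back to whatever they share.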
  • The standard is the problem ...

    This functionality is apparently used both in the open source and the proprietary world. While the problem is now being patched, Mr Seltzer seems to be suggesting that the whole process needs a complete redesign from the ground up. Since all relevant apps on both the open source and proprietary side would be affected, this creates the old chicken and egg scenario as to how it would all get choreographed. Whoever goes first will lose compatibility with everybody else. It is a dinosaur from the early days of the Internet that is defective by design. This makes it a much more foreboding problem than Heartbleed. And unfortunately, I suspect this is not the only example of this sort of thing. Just look at the mess we are facing with IPv6. We are flat out of IPv4 addresses. The solution is IPv6. But few of the ISPs are ready with IPv6 and many devices currently being sold still do not support IPv6.
    George Mitchell
    • That is because most of them like being in control

      over who and what you can connect to.

      IPv6 removes that - and they don't like it.

      Want privacy in your mail? Run your own mail server. That way you always know who gets to read it. If there is a subpoena then it has to go to you... and can't be hidden by an NSL, as you would also be the one to get that.

      Want your own web server and not pay through the nose? run your own.
  • Heartbleed primary attack doesn't require MITM, Larry

    Larry, you said, "it requires that the attacker be in a privileged position, but effectively Heartbleed does too" ... that's not true. Heartbleed exposes info - that in and of itself is one form of attack, that requires no "privileged position".

    I think what you mean is, one possible follow-on attack is to use a server's private key (retrieved via Heartbleed) to then do a completely separate man-in-the-middle attack and have access to all content exchanged - that definitely does require your "privileged position" (by which you mean, having poisoned DNS or in some other way thwarted the mitigations against man-in-the-middle attacks).

    Would that the only impact of Heartbleed was as an enabler for later MITM attacks ... in fact, most of the impact will likely be exposed passwords.
    • It does and it doesn't

      To just get the key, you don't. But to have anything to decrypt with that key (and not just random bits of memory) you need to have captured a lot of traffic involving that site, which requires a privileged position. If you hit it enough you can get a portion of the current traffic, but not the old encrypted traffic from the last two years.
  • Most Internet Protocols were never intended for their current uses

    which is why the whole boatload (baby and the bath water) should be thrown out. SSL/TLS is just one of many that need reengineering. Which might incorrectly imply they were engineered in the first place.
    • Oh they were engineered all right.

      But like anything else, you can misuse what was designed.

      RSA was not designed to provide that kind of security... It was designed to provide encryption between one point and another, allowing both sides to know that no one else could get the data...

      Distribution of keys was NOT part of the algorithm.

      But the implementation added public key exchange...

      Still did not address how you knew if the key was valid... again, not part of the algorithm.

      And since RSA is VERY slow, and symmetric encryption MUCH faster, a random symmetric key was added - with the assumption that it would be valid when passed in an encrypted channel. But that STILL doesn't address how to know if the setup keys for that channel were valid.

      It is a bit like trying to use a screwdriver instead of a hammer for driving nails... a misuse of an engineered tool.
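
The hybrid scheme the commenter describes, slow RSA wrapping a fast symmetric key, can be sketched with deliberately toy parameters (textbook RSA with tiny primes and a hash-based keystream standing in for AES; wholly insecure, illustration only). Note the sketch does nothing to authenticate who holds the keys, which is exactly the gap the commenter points at:

```python
import hashlib

# Toy textbook RSA (insecure parameters, illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Stand-in for a fast symmetric cipher (a real stack would use AES).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def hybrid_encrypt(message: bytes, sym_key: int):
    wrapped_key = pow(sym_key, e, n)  # slow RSA wraps only the small key...
    ciphertext = xor_stream(sym_key.to_bytes(2, "big"), message)  # ...fast cipher does the bulk work
    return wrapped_key, ciphertext

def hybrid_decrypt(wrapped_key: int, ciphertext: bytes) -> bytes:
    sym_key = pow(wrapped_key, d, n)  # unwrap the symmetric key with the private exponent
    return xor_stream(sym_key.to_bytes(2, "big"), ciphertext)
```

Nothing above tells either party whose public key they are actually using; key distribution and validation sit outside the algorithm, as the comment says.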