CloudFlare keys snatched using Heartbleed

Summary: CloudFlare's analysis on Friday suggesting that Heartbleed could not be used to recover private keys turns out to be wrong. Two challengers recovered the keys from the company's challenge server.

TOPICS: Security

Two successful attempts have been made at recovering private server keys from CloudFlare's Heartbleed challenge server.

The two winners are Fedor Indutny and Ilkka Mattila. Indutny, who succeeded first, made 2.5 million Heartbleed requests over the course of the day; Mattila made 100,000.

CloudFlare rebooted the server at one point during the test, which the company says may have contributed to the first successful attempt.

As Dan Kaminsky points out, even the researcher who found Heartbleed initially shared CloudFlare's assessment that private keys were probably safe.

Kaminsky makes other good points, and his blog is well worth reading if you are a system administrator or CISO affected by Heartbleed. His advice is to patch immediately, especially on Internet-facing systems. This should be your immediate focus, before revoking and reissuing certificates or helping users change passwords.




  • Microsoft has been burned by the "highly unlikely" mentality.

    Perhaps everyone else should learn from their hard knocks.
  • Was that really a "reboot", or a "restart"?

    Some services are not restarted from scratch: instead of doing an exec, they just reset their state to the beginning and continue, and that state reset does not clear the memory in their buffer pools.

    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0;  /* as patched: silently discard heartbeats whose claimed payload exceeds the record */
    pl = p;

    The compiler directive mentioned in the subject line also works on unpatched OpenSSL 1.0.1f and earlier 1.0.1 releases.

    Why not run contests like this against all kinds of patches and other security software?
    Is there a real need for the heartbeat functionality at all?

    CloudFlare provided real-time proof while most others dithered.
    • Just to spread it

      -march=i486 is set by default in the configure script.

      As of gcc 4.7 and up, -march=(some cpu) creates executables that in many
      cases run only on the specified CPU architecture; on other architectures
      they often even crash, especially with modern multi-core targets.

      Search and replace the configure script to use -mtune=(some cpu) instead.
      I've tested all kinds of -mtune builds on an array of CPU architectures,
      and every port has been successful and rock-stable.
  • Capturing the private keys ups the ante significantly

    It opens the door to site spoofing and other risks, and there have been several browser workarounds that disable server certificate checking. I'm not sure how that would affect a revoked certificate, but it might be accepted, depending on the configuration.

    Then there's this from Google in 2012: Google plans to remove online certificate revocation checks from future versions of Chrome, because it considers the process inefficient and slow.
    • Revocation checks have always been nearly useless.

      The problem is that searching revocation lists takes ever more time.

      You never get to remove any entries - thus they always grow.

      The larger the list, the slower it gets.

      And revocation has to be checked at every level of the certificate chain.

      If a chain is CA + company CA + application certificate, that is two or three checks, against two or three lists.

      Quite frequently, that is a CA + company CA + division CA +...

      You get the picture.

      Second, updating the CRLs takes time... And a number of agencies don't even bother; they just depend on the expiration date embedded within the certificate.

      And if you try for a central CRL... remember that the government alone has over 15 million certificates, which could grow the CRL by about a million entries per year.

      Instead, CAs will issue something like a root certificate for 10 years, or longer for those who pay more: AOL's certificate was issued for 35 years, its AOL Member CA for 30, and other business CAs for 20...

      Microsoft, on the other hand, uses 2, 3, and 4 years.

      And every certificate that is revoked should properly be entered in a CRL...

      Instead of doing so, issuers try to have the expiration occur before an exposure becomes probable.

      Revocation lists are truly horrible. If you think DNS is a bit slow now (up to around 10 seconds to search for a new entry), think how much longer a 0.5-second revocation check at each level will make every lookup (after all, it gets recursive: you have to do a DNS lookup to reach a CRL, each level may need its own CRL server, and that server must also be verified...).

      It works acceptably well when there is only one level to check. Tolerable (barely) when there are two (you get to locally cache the entries after the first check of the two levels), but caching starts to fail when there might be hundreds of checks for a single web page from dozens of different ads. The function of managing the cache (which definitely grows) alone would take a significant amount of time.
      • Quite simply, SSL/TLS has been permanently broken by OpenSSL

        Without revocation checking, revoked certificates will live on at least until their expiration, so 20-30 years... may as well be permanent.

        Of course, somebody could tweak revocation checking to make it more robust and responsive, just as they tweaked SSL to add the heartbeat to avoid overhead.

        SSL as we know it will have to change, just as airplane tracking must change. There is no continuing business as usual from this point, because you cannot know your exposure and you cannot control it.
        • permanently broken by OpenSSL

          I would rather describe it as broken by a lack of coordination, owing to a vast array of IT technicians picking up their know-how from commercial software packages.

          And an almost hysterical quest to add new features.

          I've seen comments where IT technicians refer to "misuse of memory management" while taking TLS to mean thread-local storage, the thread-safe CPU/kernel facility, when in fact the OpenSSL problem concerns the TLS Internet protocol.

          Using the same name for two completely different things reflects a total lack of coordination, and the not-so-solid education of the responsible IT managers.