If public key cryptography were really broken

Summary: The solution to a mathematical problem generally considered intractable would doom almost all trust on the Internet. Could it actually happen? We'd better hope not.

TOPICS: Security

The asymmetric cryptography on which so much security on the Internet is based relies on one of two mathematical assumptions to work: that it is impossible, other than through brute force, to determine the factors of large integers when those factors are two or more large primes, or (to quote Wikipedia on the subject of Elliptic Curve Cryptography) "… that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible".

The elliptic curve y² = x³ - 3x + 3. Credit: Adam Langley at ImperialViolet

"Brute force" in this case means that you have to iterate through all the possibilities until you find one that works. In a good, modern algorithm, the number of possibilities is extremely large, making it time-consuming even with vast computing resources.

What if these assumptions were no longer valid? What if, for one reason or another, factoring the products of large primes and/or determining the discrete logarithm of a random elliptic curve element became something which could be done quickly with reasonable computing resources?
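The discrete-log side can be sketched the same way, here in a multiplicative group mod a prime (real ECC works over elliptic-curve groups, but the exhaustive search is analogous; `discrete_log` is my illustrative name, not a library function):

```python
def discrete_log(g, h, p):
    """Find x with g**x % p == h by trying every exponent in turn."""
    acc = 1
    for x in range(p - 1):
        if acc == h:
            return x
        acc = (acc * g) % p  # step to the next power of g
    return None

# Toy example: recover an exponent hidden in h = g^secret mod p.
p, g = 1009, 11
h = pow(g, 777, p)
x = discrete_log(g, h, p)
print(x, pow(g, x, p) == h)  # verification prints True
```

With p around 2^256, as in real curve groups, this loop is hopeless; that infeasibility is the whole assumption.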

Such a development would really suck. It would mean that the security on which the overwhelming majority of the Internet and non-Internet computing is based would be useless.

Cryptographer and general security guru Bruce Schneier considered this problem in a recent blog entry called "The Cryptopocalypse," written in reaction to a presentation at the recent Black Hat conference. Schneier is not especially worried, and his reasoning makes sense to me.

With respect to factoring, which is the much more common method, he cites an observation he made in 1999:

Factoring has been getting easier. It's been getting easier faster than anyone has anticipated. I see four reasons why this is so:
  • Computers are getting faster.
  • Computers are better networked.
  • The factoring algorithms are getting more efficient.
  • Fundamental advances in mathematics are giving us better factoring algorithms.

He adds that the issues are very similar for the discrete log problem.

Computers are indeed getting faster all the time, and networks are becoming more prevalent and faster. This means that it's easier to throw computing resources at a problem like brute-force key cracking. The third factor, more efficient algorithms, does happen, but not as radically or regularly as improvements in CPUs and networks.

It's the fourth one that's the real dystopian nightmare; it's the only one that could really make PKI obsolete. This is, in part, because while the other three make cracking faster, standards are advancing at the same time to make keys larger and more difficult to crack. The Black Hat presentation Schneier refers to notes that this race may force RSA keys (the keys used with factoring) to unmanageably large sizes, but ECC remains as an escape valve, because it needs much smaller keys and takes exponential time to crack.
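The size pressure on RSA versus ECC shows up in the standard strength equivalences. The figures below are the commonly cited NIST SP 800-57 comparisons, not numbers from the article:

```python
# (symmetric-equivalent security bits, RSA modulus bits, ECC key bits)
# per NIST SP 800-57 comparable-strength tables
equivalences = [
    (80,  1024,  160),
    (112, 2048,  224),
    (128, 3072,  256),
    (192, 7680,  384),
    (256, 15360, 512),
]
for sec, rsa, ecc in equivalences:
    print(f"{sec}-bit security: RSA needs {rsa}-bit keys, ECC only {ecc}-bit")
```

Doubling the security level takes RSA from 3,072 to 15,360 bits, while ECC merely doubles from 256 to 512; that asymmetry is the escape valve.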

It's true that changing software is a burden and, because of that, many companies are still using old, obsolete crypto algorithms. But the fact remains that modern, strong crypto options are available to those willing to invest in them. No matter how much you invest in PKI, though, if the basic mathematical assumptions behind it are broken, the security is non-existent.

Schneier is not especially worried about it. He disputes the significance of recent mathematical work claimed to address the discrete log problem and says he knows of no mathematical work that would really undermine the important assumptions.

This only makes the problem a little less scary. Progress on cracking ECC may be happening even if Schneier doesn't know about it; after all, you could sell such a technique for a hell of a lot of money. The main reason I won't worry about it is that I may as well worry about a comet colliding with Earth. There's nothing I can do to defend myself, and it may never happen, or at least not for so long that solutions will be available. But it does give me pause to think that almost all trust on the Internet rests on the intractability of one specific mathematical problem.

  • Human Ingenuity Works Both Ways

    If people are smart enough to figure out fast factorization and discrete logs, they'll also think up new number-theoretic problems that will be even harder to solve, that can be used as the basis of encryption algorithms.

    After all, the entire past history of scientific research has shown that each answer spawns ten new questions.
  • Missed the fifth problem

    And that is time. With all the data sent across the Internet being archived, and with advances in decryption and computing, all that is required is time. Something encrypted 20 years ago would be fairly easy to crack with today's quad-core (or more) CPUs. Anything encrypted today will share the same fate at some point in the not-so-distant future.
  • The point of encryption

    The real point of encrypting data is to keep the data safe from prying eyes for reasonable time periods. If the encryption can stand up to brute-force attacks for 200 years or more, it is safe for any reasonable time period in the future; say, 50 years. What historically "cracks" encryption are mistakes made in implementation, known patterns within the message (stylized greetings, etc.), and other careless practices.
    • I don't

      I don't believe any encryption can last 50 years; maybe 10 to 15.
  • ECC

    Who owns the patent for that?
    Susan Antony
    • Depends on which version you use

      Regular ECDSA is not covered by any patents as far as I know. There are some versions of it that are, though.
  • Internets

    This kind of discussion highlights a bigger problem/challenge/issue: in our lifetime, there will no longer be the internet... it will be internets. Iran is already working on this. Today, security is so difficult to maintain, and hardware advancements are so fast, that the only way to really be protected is to stay out of the battle and go private. This seems a pipe dream now, but advances in wireless mesh technology (for instance) could mean that in the future entire nets of millions are created and/or abandoned in months or a few years. Another idea is the use of changeable avatars. Screen names are a crude version of this; in the future, you may never enter the net yourself. Rather, you send a very powerful avatar out to represent you. Only you and your closest friends know the connection; if your avatar is compromised or unsuitable, you get another.
    • I could see two possibilities

      The Internet becomes limited to a media/network-layer utility, with multiple private protocols established on top. Everything at the packet layer and below is encrypted. The other possibility is true private networks built on everything from mesh to powerline technology. But as with existing physical plants, mass adoption leads to centralization, which leads to central control and monitoring. As long as there is a chokepoint, the snoops will find it easy to tap and censor.
      terry flores
      • hardware encryption, SSL, TLS

        I believe encryption of all the sensitive internet traffic is already a reality.
  • How do we know that it isn't already broken?

    Knowing that governments wish to routinely mine every byte in and out of our computers (GCHQ, anyone?), you'd think it would be easy to obtain privacy through encryption.

    If that were the case, surely GCHQ is wasting an awful lot of time, money and resources on installing monitoring that can be defeated by free, open-source or proprietary encryption at the source.
    I'd think that in this scenario it's far more likely that we are told it is uncrackable when in fact it is just hard for the public to crack without access to the kind of computing power governments are known to have (and probably more hidden away that we don't know about).

    I certainly wouldn't trust any encryption from a major company that possibly has a remit to government somehow, and given that government's inability to stop children breaking its systems for lulz, I'm not sure I trust it to secure the data it lifted from me covertly either.
  • Encryption

    If you can ensure a twenty-year time to crack, you are safe as an individual. Before someone is going to spend the money cracking a message, they have to be absolutely sure the message has sufficient value to justify the work. It would be a real bummer to spend all that money to crack something only to find out it was a nursery rhyme. That is why there is a definite advantage in encrypting everything. The vast majority of the stuff would be of no interest to anyone.
  • Quantum Computing

    There's also the problem of emerging quantum computers, which can solve factorization problems fairly easily (see Shor's algorithm: http://en.wikipedia.org/wiki/Shor%27s_algorithm).
    D-Wave offers a 512-qubit quantum computer commercially, although there's still some debate about whether their adiabatic quantum computation mechanism is really up to the task...
    So, the Internet community should channel a common effort into developing QC-safe cryptographic mechanisms which change the implementation details, but not the usage scenarios (see Post-quantum cryptography at http://pqcrypto.org/).
  • smaller math advances could be a problem

    What if an applied mathematician came up with a way to make brute force not so brutish? That is, to tell, from the history of one's past efforts (trying primes x1, x2, etc., and seeing the results, ratio and remainder, of dividing the key by each), which other primes were and were not worth trying? Then the process might become one of converging on a solution through many iterations, rather than randomly trying one prime after another. There would still be no "solution" to the problem, mathematically, but it would become computationally tractable. I'm not a mathematician, and maybe there are reasons why this is just as impossible as a solution, but I worry about it.