Heartbleed: Open source's worst hour

People assumed that open source software is somehow magical, immune to ordinary programming mistakes and security blunders. It's not.
Written by Steven Vaughan-Nichols, Senior Contributing Editor

Heartbleed was open source software's biggest failure to date. A simple OpenSSL programming mistake opened a security hole in a program that hundreds of millions of websites, and God alone knows how many users, relied upon for their fundamental security.


We know what happened. A programming blunder enabled attackers to pull down 64KB chunks of "secure" server memory per request. Of course, a hacker would then have to sift through this captured memory for Social Security numbers, credit-card numbers, and names, but that's trivial.

We know how it happened. German programmer Dr. Robin Seggelmann added a new "feature" and forgot to validate a variable containing a length. The code reviewer, Dr. Stephen Henson, "apparently also didn't notice the missing validation," said Seggelmann, "so the error made its way from the development branch into the released version." Then, for about two years, the defective code was used, at one time or another, by almost every Internet user in the world.

Sorry, there was no grand National Security Agency (NSA) plan to spy on the world. It was just a trivial mistake with enormous potential consequences.

So why did this happen? Simple — everyone makes mistakes. Estimates of the number of errors per thousand lines of code (KLOC) range from 15 to 50 per KLOC in typical code, down to about three if the code is rigorously checked and tested. OpenSSL is approximately 300,000 lines of code. Think about it.

Still, open source programming methodology is supposed to catch this kind of thing. By bringing many eyeballs to programs — a fundamental open source principle — it's believed more errors will be caught. It didn't work here.

This mistake, while not quite as much a beginner's blunder as Apple's "goto fail" fiasco, was the kind of simple-minded mistake that any developer might make when tired, and that anyone who knows their way around the language should have spotted.

So why didn't they? Was it because OpenSSL is underfunded and doesn't have enough programmers?

Was it because, as Poul-Henning Kamp, a major FreeBSD and security developer, put it, "OpenSSL … sucks. The code is a mess, the documentation is misleading, and the defaults are deceptive. Plus it's 300,000 lines of code that suffer from just about every software engineering ailment you can imagine"?

Was it because proprietary software has more paid eyeballs to look for errors? I have two words for that idea: "Patch Tuesday."

So why did this really go uncaught for so long? Why did Google, Facebook, Yahoo, and even the NSA fail to find such a gaping security hole?

I think I know why, and I can sum it up in one phrase: "magical thinking." We think that because open source code can be more secure, it automatically is more secure. Wrong!

Everyone just assumed that OpenSSL must be perfectly safe because, well, OpenSSL had a reputation for being safe. Developers, website operators, security experts — it seems no one ever thought to actually use those many eyeballs that successful open source relies upon and check whether the code really was safe.

We were idiots.

We thought that because OpenSSL was open source, everyone was actually using open source methodology to make sure its code was correct. In reality, after that initial approval years ago, no one ever bothered to check whether the code was both correct and secure.

The open source method remains as good as ever when used correctly. When it's not — when we simply assume that all the t's have been crossed and the i's dotted — then we're relying on faith rather than testing, and that doesn't work for any program.
