Heartbleed soul-search: regulation proposed for critical crypto code

Summary: Sophos' James Lyne delivers an impassioned speech on how we got to the point where Heartbleed was possible, and why we shouldn't be surprised that it happened.

Was OpenSSL's memory management code a disaster waiting to happen? While Heartbleed is being blamed on buggy source code that was submitted shortly before midnight on 31 December 2011, a time not associated with good judgement, there are deeper concerns.

As pointed out by Theo de Raadt, founder and leader of the OpenBSD and OpenSSH projects, in a post on Tuesday, OpenSSL doesn't use the standard memory management code supplied by the operating systems on which it runs — the function calls familiar to Unix and Linux developers, malloc() and free(), to allocate and free up memory respectively.

OpenSSL rolls its own memory management system.

"Definition of not awesome," said James Lyne, global head of security research with Sophos. Had OpenSSL been using the system-provided memory management, the odds that a memory exposure bug like Heartbleed would reveal private encryption keys would be "ridiculously lower", he said.

Malware researcher Jake Williams, principal consultant at CSRgroup Computer Security Consultants, agreed. "Had the developers not been rolling their own memory allocation scheme, it's very likely that in some cases this may have caused some kind of [general protection fault or segmentation fault]. It may actually have been a denial of service bug in many cases, or it definitely would have been much more the luck of the draw [when] pulling private keys or private data."
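
The failure mode Williams describes can be sketched with a toy freelist allocator in C. This is purely illustrative — the names and layout here are invented, and it is far simpler than OpenSSL's actual allocator — but it shows the core hazard: freed buffers are kept on a list and handed back verbatim, so a later over-read of a "fresh" allocation can expose whatever the previous owner left behind.

```c
#include <stdlib.h>
#include <string.h>

/* Toy freelist allocator (illustrative only; not OpenSSL's code).
 * Freed buffers are kept on an internal list and reused verbatim,
 * so their old contents survive into the next allocation. */

#define BUF_SIZE 64

struct free_node { struct free_node *next; };
static struct free_node *freelist = NULL;

void *pool_alloc(void) {
    if (freelist) {                      /* reuse a freed buffer */
        struct free_node *n = freelist;
        freelist = n->next;
        return n;                        /* old contents NOT cleared */
    }
    return malloc(BUF_SIZE);
}

void pool_free(void *p) {
    struct free_node *n = p;             /* no wipe, no release to the OS */
    n->next = freelist;
    freelist = n;
}
```

With the system allocator, a freed region may be unmapped, poisoned, or guarded, so the same over-read often crashes instead of leaking; with a freelist like this, the "new" buffer still holds most of the previous owner's bytes — which is why Lyne put the odds of key exposure so much lower under system-provided memory management.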

The two researchers were speaking at the third Heartbleed briefing for the SANS Institute's Internet Storm Center (ISC), held on Friday morning Australian time (Thursday afternoon US time). Lyne stepped back from the immediate issues of tackling the bug to reflect upon why this happened in the first place.

"When you actually go and look at this bug ... this compared to the majority of exploits that we have to go dig for today, with all the wonderful mitigations out there in modern operating systems, it's really easy to find, and really dumb. This shouldn't happen."

"This seems fundamentally wrong, at this point of use of this software, that we should be in a position where these kinds of things can happen."

Lyne said that he recognised that security was never 100 percent, and that all projects have bugs, but given the importance and widespread use of OpenSSL, it was possible to consider it as an example of "critical infrastructure".

"I mean this should be stuff that's taken seriously — regulated even — given the serious role that it plays in the internet," he said.

"And I think a lot of it comes back to this perception of the technology. We all let it slip into incredibly widespread use — sprawling and expanding into all these different places — with the perception that it's open source, therefore, as a black box, it's secure because other people are looking after it for me.

"It's the other people's problem."

Many people trust open source simply on the basis that they expect other people have looked at the code, Lyne said, but he pointed to OpenSSL's lack of funding as a problem with relying on the project.

"We're all depending on this really, really heavily, expecting them to do a great job, and yet actually they're desperately under-funded."

"I don't want to say that open source is bad. I'm a huge believer in the initiative, and what it can do for the quality of software.

"But a lot of people trust open source as secure because others are looking, but in reality this team has a reported budget for all of their work of less than a million dollars, and through the course of this week — which you'd think would be a fairly important week for them — they have received $841 of donations. Which is sad."

"There's a section on the site here that says, if you give more than, I think, it's $20,000, we'll put your logo on our home page. There are no logos. No-one is giving these guys money."

Lyne said that whenever he comes across self-rolled cryptographic code, it is "so unbelievably terrible that my soul hurts a little bit every time I look at it".

"There was a great quote today, someone said: 'Crypto people shouldn't be allowed to write software.' And someone retorted wonderfully with, 'Software engineers shouldn't be allowed to write crypto.' You guys should talk to each other."

Given this context, Lyne would doubtless consider de Raadt's view, "OpenSSL is not developed by a responsible team," overly harsh. But both would agree that something needs to be done.

Topics: Security, Open Source


Stilgherrian is a freelance journalist, commentator and podcaster interested in big-picture internet issues, especially security, cybercrime and hoovering up bulldust.

He studied computing science and linguistics before a wide-ranging media career and a stint at running an IT business. He can write iptables firewall rules, set a rabbit trap, clear a jam in an IBM model 026 card punch and mix a mean whiskey sour.


Log in or register to join the discussion
  • OpenSSL rolls its own memory management system

    That is an advantage, as one doesn't need to patch a "system" or depend on one
    when flaws are detected, as they always will be.

    Whatever "security code" one produces, it can never be 100% failsafe.
    The main issue is how fast it can be patched; I've been running a patched OpenSSL
    for close to a week now.

    Beat that, you "OS" producers
  • violation of basic programming rule

    you never allow input data to control execution of your program. in this case the value for "payload length" was accepted without checking, which is how OpenSSL came to process what is essentially a WRONG LENGTH RECORD

    the attacker submits a heartbeat record requesting a 64k payload to be returned but supplies only 1 byte to be used as that payload. so, without checking, the program reads the one byte plus nearly 64k of adjacent memory -- resulting in exfiltration of the server's data

    what was missed: this code was not properly peer reviewed. the reviewer may have been rushed or may not have understood this basic rule of program quality. in either case the result is the same: improper review and a defect delivered into the market.

    i fought for years with programmers who felt they should not have to edit their inputs. i had a sign on my wall:

    "Do not come in here with a core dump in your hand and say your program was ok because it ran yesterday". this was because in many cases a keypunch error would abend the guy's program.
  • The industry asked for this...

    Why, because a huge amount of very rich guys (and gals) decided to get even richer by building an entire infrastructure off of an OS that started as a schoolboy's hobby and has a lot of parts initially put together by folks in their spare time. Well meaning and all, and it was a good idea at the time, but now the results are coming home to roost.

    Once open source software started to make it into REAL usage, all the rules changed and it had to grow up. And considering that big-time pay-for-it IT has problems with that (look at how long it has taken to get a version of Windows where most of its vulnerabilities are in the third-party stuff that runs on it - and we PAY for that), it is understandable. But the MENTALITY is the same - hell, it works, so just use it. Then this happens. Surprised?

    The real fun hasn't even started. For a lot of big companies and their IT departments, this is going to be their "GM ignition switch" moment when the politicians get their hands on it. Everyone better hope that a "Target" doesn't get traced directly back to this (and who knows, since the full details of that breach have never been released, whether it wasn't) because if it does, it will make GM, Snowden and the NSA, and Target look like minor bumps in the road.

    And expect it all summer. Oh, and expect EVERY IT department to start asking questions about EVERY piece of software in the shop - especially if it is open source. Hang on, it is going to be a bumpy ride.
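
The over-read described in the comments above, and the bounds check that fixes it, can be sketched in a few lines of C. This is a simplified, hypothetical illustration — the record layout and function names here are invented, not OpenSSL's actual source or the real TLS wire format — but the essential flaw is the same: the unpatched code trusted the attacker-supplied length, and the fix compares that claim against the bytes actually received.

```c
#include <stdint.h>
#include <string.h>

/* Simplified heartbeat record (illustrative layout, not the real TLS
 * wire format): [type:1 byte][payload length:2 bytes, big-endian][payload] */

#define MAX_RECORD 65536

/* Echoes the payload into out; returns bytes written, or 0 if the
 * record is discarded as malformed. */
size_t build_response(const unsigned char *record, size_t record_len,
                      unsigned char *out)
{
    if (record_len < 3)                  /* too short to hold a header */
        return 0;

    /* Attacker-controlled claim of how many payload bytes follow. */
    uint16_t claimed = (uint16_t)((record[1] << 8) | record[2]);

    /* The fix: reject records whose claimed payload length exceeds
     * what was actually received. Without this check, the memcpy
     * below would read up to ~64k of adjacent heap memory. */
    if ((size_t)claimed + 3 > record_len)
        return 0;                        /* silently drop the record */

    memcpy(out, record + 3, claimed);    /* now bounded by reality */
    return claimed;
}
```

An attacker sending a record that claims a 64k payload but carries only one byte is simply dropped; a well-formed record is echoed back as intended.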