Tuesday was supposed to be about Windows XP, and basically it was, but there was another event which was of greater security significance: Heartbleed.
Heartbleed is a catastrophic bug in OpenSSL, a software library of great significance, used by almost everyone (except Microsoft) for their SSL/TLS code. It's a truism of cryptographic programming that you don't write your own crypto code, because it's so important and so hard to get right; you trust the operating system's library or some other well-established library, which in the real world means, as a practical matter, either the Windows Cryptographic API or OpenSSL (although there are others).
Obviously it's of critical importance that this code be as correct and unexploitable as possible. To whom is it important? To everyone with an interest in private communications. Heartbleed makes a mockery of this protection by exposing server memory, including private keys, to attackers. It has existed since December 31, 2011 and has been in wide, and growing, use since the release of OpenSSL 1.0.1 on March 14, 2012.
It's interesting that Heartbleed came out of a new feature added to TLS, the Heartbeat Extension. Last night I observed a Twitter fight between two security experts I follow (Dan Kaminsky and Thomas Ptacek) over whether it was a good idea to add the Heartbeat Extension to OpenSSL. The problem is in the implementation, not the protocol, but with such critical code (one of them argued) spurious features should not be added.
Is the Heartbeat extension spurious? That was the heart of the argument. The Heartbeat allows, as the RFC says, "...the usage of keep-alive functionality without performing a renegotiation..." With all due respect to @tqbf, clearly this is of value, if not essential to the core protocol. The problem wasn't including the Heartbeat, it was including it without sufficient scrutiny.
In fairness to the OpenSSL team, I don't know how much scrutiny they subject their code to. But, even so, it can't ever be enough. One of the suggestions that @dakami makes is "I believe strongly in federally funded source monitoring of important projects. Social good, social burden."
Now this is thought provoking. Even as a relatively libertarian type, I think the government has always had a proper role in the establishment and maintenance (including security) of critical standards. (It's even in the Constitution, Article I Section 8: "The Congress shall have Power To... fix the Standard of Weights and Measures...") It's not much of a stretch in my mind to extend this power from actual standards, with which NIST is concerned, to critical implementations of software standards.
The system could work through grants to some outside responsible organization, perhaps even a private company, to perform audits on software determined by the responsible government authority (it could be NIST as well) to be critical.
There's never enough money to go around for these things, but I'd suggest that the security of this and certain other code is critical enough to private industry that they could be induced to make contributions to a fund for it. Any such audit work should be made completely public of course. As Matthew Green, who teaches cryptography at Johns Hopkins, puts it, "The best contribution would be for the 5 biggest tech companies to each pledge 1 dev for 2 years. No strings." When you put it that way it doesn't sound like a lot.
Many private companies already perform security audits on their own code or on open source components that they use. Apparently it's not enough and it never really can be enough. (Did anyone already do an audit of the OpenSSL TLS Heartbeat implementation and miss Heartbleed? That would be embarrassing.)
What other programs are critical infrastructure? Sound off below.