Security and the Linux process

Written by Joe Brockmeier, Contributor

In his latest entry, Dana asks whether the Linux process is insecure, because it's not possible to warn the "vendor" before warning the general public about security flaws in Linux. He also notes that "Microsoft has theoretical control of this situation."

There are several problems with this line of reasoning. I'm not going to argue that the open source model of development is perfect, but it offers several advantages over the proprietary model. Let's start with the most obvious.

Yes, if I discover a vulnerability in the Linux kernel -- or any other open source project that does development on public lists and completely out in the open -- when I reveal the problem on the development mailing list, I reveal it to the public. It's worth noting that some open source projects, like Mozilla Foundation, have systems that allow developers to file bugs and security issues without disclosing details to the public at large.

But, for projects like the Linux kernel, most of the development is done in the open. However, Dana's scenario assumes that the person discovering the exploit chooses to broadcast the finding on the mailing list to all of the developers simultaneously, which isn't necessarily the case. Developers can, and often do, discuss a security flaw privately rather than on a public mailing list. And even when someone does disclose a flaw publicly, the other kernel developers can begin working on a fix immediately.

With Linux, it's not only possible for developers and researchers to comb through the Linux source code to look for vulnerabilities, it's also possible for those folks to submit patches that fix the vulnerabilities in question. It's not uncommon for someone to uncover a vulnerability, report it and submit a patch for it at the same time. Unless you happen to work for Microsoft, this isn't possible with Windows and other Microsoft products. With proprietary software vulnerabilities, users are usually dependent on a single vendor to react to the flaw and issue a patch. Given Microsoft's track record for responding to vulnerabilities and issuing patches, that's not a comforting thought. Case in point: a highly critical flaw in the Microsoft Jet Database Engine, announced in April, remains unpatched. According to Secunia, exploit code has been posted to a public mailing list, and Microsoft has yet to issue a patch.

Assuming a researcher submits a report about a security vulnerability to Microsoft before making it public, they are then dependent on Microsoft to fix the vulnerability. The researcher typically will not have access to Microsoft's code, so they are essentially doing "black box" testing -- whereas anyone can assist with finding security flaws in open source code just by reading through the code itself. This concept scares people who still believe that "security through obscurity" is a good way to try to make software more secure. However, given the number of spectacular security flaws that Microsoft has seen over the years -- along with some very effective exploits -- I think we can see that this doesn't truly afford proprietary software vendors any real advantage.

And, as I said, that's assuming that a researcher decides to play nice and inform Microsoft of a vulnerability before disclosing it to the rest of the world. That's not a safe assumption, by any means. Microsoft may have "theoretical" control of the situation, but in reality, it has no more control over vulnerability discoveries than the Linux kernel team does. A look at the Bugtraq list will show that many disclosures are made without first being run past the software vendor -- along with posts from frustrated researchers disclosing flaws that were reported to the vendor and never received a response. There's another advantage to initial public disclosure: once everyone knows about a security flaw, the urgency to fix it is greatly increased.

The point that systems are most vulnerable between the time an exploit is made public, and the time that a patch is issued, is well taken. It's up to open source projects and vendors to try to close that gap as much as humanly possible. However, the fact that Linux development is done publicly is not a cause for alarm.
