Detractors of open source software often point to its broad developer base and open source code as a potential security risk. But that's not a fair assessment, according to Dr Ian Levy, technical director with the CESG, a department of the UK's GCHQ intelligence agency that advises UK government on IT security.
Open source is no worse or better than proprietary software when it comes to security, according to Levy, who busted myths about open source security — and detailed its genuine security challenges — at the Open Source, Open Standards conference in London last week.
"I've done a lot of work on this, there's no objective evidence either way. On average, good open source is about as good as good proprietary, and [bad] about as bad as bad proprietary," said Levy.
Asking whether any piece of software is secure is too broad a question, according to Levy. A more valuable approach, he added, is to ask what security guarantees your organisation wants from a piece of software and then ask whether the software delivers that.
The idea that, because open source code is open for anyone to look at, its security will have been subjected to greater and more worthwhile scrutiny is questionable, said Levy.
Of everyone who had downloaded the Linux kernel code, he asked: "'Who thinks they are competent to judge the security of the Linux kernel?' Downloading 21 million lines of Linux code and saying 'I've got the code and I've looked through it, so I can convince myself it's secure' is often nonsense.
"Many eyes give you many eyelashes, and not a lot else."
Levy was similarly dismissive of the notion that publishing the source code hands attackers an advantage.

"Again that's nonsense. If I look at how people break software, they don't use the source code. If you look at all the bugs in closed source products, the people that find the bugs don't have the source, they have IDA Pro, it's out there and it's going to work on open and closed source binaries — get over it."
While the many-eyes claim might be true of some open source projects, "in a lot of open source projects it's not", he said. To offset this risk, learn about the open source project and its history and make a judgement call, he advised.
"Just because it's open source doesn't mean that it's free from restrictions. It could be in the licence — the GPL places restrictions on you, the BSD licence fewer. They may not be relevant to you at all, but there are restrictions."
Even if licensing isn't an issue, organisations can fall foul of separate IP rights conflicts, he said.
Levy gave the example of the distributed compute and storage software Hadoop, which is referred to as an open source project.
"It's a patented algorithm. Forget the implementation. The implementation may be IP-free but the algorithm is patented — do you think you can use it?"
Levy also rejected the idea that every piece of software, open or closed, needs a formal security evaluation.

"That would be insane, and yet we still hear this. Around government every piece of software has to be evaluated before we buy — it's utter nonsense. Only security-enforcing functions need security evaluation."
The online distribution methods used by many open source projects are vulnerable to genuine software binaries being replaced with fakes containing malicious code, he said.
"How do we get assurance for online distribution, because a SHA-1 hash and a PGP key sat on the same server as the distribution doesn't do it for me. There've been publicly noted attacks against distribution servers. Nobody's touched the source code, but they've touched the binary, the MD5, the SHA-1 and PGP code around it.
"You've downloaded it and checked the hash but you got the hash from the same place you got the binary. Where's my root of trust?"
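Levy's trust-root objection can be sketched in code. The point is not the verification itself but where the expected digest comes from: it must arrive via a separate, trusted channel, because an attacker who can replace the binary on a distribution server can replace a co-hosted hash file just as easily. This is a minimal illustration, not any project's actual release process; all names are hypothetical, and SHA-256 is used rather than the SHA-1/MD5 Levy mentions.

```python
# Sketch: verify a downloaded binary against a digest obtained OUT OF BAND
# (e.g. printed documentation, a separately secured site), never against a
# hash file sitting next to the binary on the same distribution server.
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in 64 KB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """True only if the file matches the digest from the trusted channel."""
    # compare_digest avoids leaking match position through timing
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())
```

A detached signature checked against a signing key fetched from somewhere other than the distribution server gives a stronger guarantee still; the hash comparison above only helps if the expected value genuinely came from elsewhere.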
The same question over provenance of the code can be raised when it comes to receiving patches for open source software, he said.
"If I go to Windows Update I know it's signed, and I have a process that works inside Microsoft. What do I know about Red Hat? A lot, and it's broadly equivalent.
"What do I know about 'Ian's Honest HTTP Server' software? You're going to have to do the work to assure yourself those patches [are] sensibly controlled."
"Open source patches have to put out the source. They inherently disclose the underlying issue. So if I've got a security vulnerability in a product and I put a binary patch out, it's a chunk of work to reverse engineer it and work out what the underlying thing is. If it's open source, then I'm putting out a source patch and so I'm telling my attacker exactly what the problem is.
"That's not necessarily a bad thing, providing you've got a sensible patching regime. You've really got to have a sensible patching regime because time to exploit is probably going to [be] lower."
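A toy example makes Levy's disclosure point concrete. The function below is hypothetical, not from any real project; the patched version adds a bounds check, and that added check is exactly what a published source patch reveals: any reader can see that the unpatched version accepted out-of-range requests.

```python
# Hypothetical pre-patch version (silently truncates out-of-range reads):
#     def read_field(buf: bytes, offset: int, length: int) -> bytes:
#         return buf[offset:offset + length]   # no bounds check
#
# Patched version -- the diff between the two is the vulnerability report.
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    """Extract a field from a buffer, rejecting out-of-range requests."""
    if offset < 0 or length < 0 or offset + length > len(buf):
        raise ValueError("field out of bounds")
    return buf[offset:offset + length]
```

A binary patch hides the same change behind a round of reverse engineering; a source patch hands it over directly, which is why the patching regime has to outpace the attacker.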
Open source projects also have individuals and groups who actively track bugs in the code, making potentially unpatched bugs more visible.
"If they're open to everybody that's another issue, because now you get zero day exploits because there's no patch," he said.
"How do I know who's written my code, how do I know what they've imported, how do I know what other stuff is in there?" said Levy.
When it comes to imported software modules, for example, an organisation can expect that a commercial company has a team of lawyers that check this imported code for licence compliance and engineers to handle reported bugs.
"How do I get the same kind of assurance with free software? What can I say about the legality of it? How do I know that somebody has looked at the licences of these imported modules and done the due diligence on it?
"I'm not saying you can't do it, I'm saying how do you do it? It's a different set of challenges."
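One starting point for the due diligence Levy describes is simply inventorying what each imported package declares about its licence. The sketch below uses only Python's standard library metadata; declared metadata can be missing or wrong, so this is a first pass to flag gaps, not a substitute for the legal review a commercial vendor's lawyers would perform.

```python
# Sketch: a first-pass licence inventory of installed Python packages,
# reading the declared "License" field from each distribution's metadata.
from importlib.metadata import distributions

def licence_inventory() -> dict:
    """Map each installed distribution name to its declared licence string."""
    inventory = {}
    for dist in distributions():
        name = dist.metadata.get("Name") or "<unknown>"
        # An empty or absent field is itself a finding worth following up.
        inventory[name] = dist.metadata.get("License") or "UNDECLARED"
    return inventory

if __name__ == "__main__":
    for name, licence in sorted(licence_inventory().items()):
        print(f"{name}: {licence}")
```

Anything reported as UNDECLARED, or carrying a licence incompatible with your intended use, is where the human due diligence starts.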
"Change of personality can have a much bigger effect in an open source product than it can in a commercial product. A commercial product has a brand value; an open source product is driven by a bunch of people. You'd hope they are all broadly aligned, but there have been spats in open source projects where they've massively changed direction."
Being able to evaluate the security of software relies heavily on knowing the developers and having some insight into their future plans for the software, according to Levy.
"The security evaluation is about the developer relationship, it's not about source code," he said.
"Anybody who thinks an evaluation crawls through every line of source code looking for vulnerabilities is sorely mistaken — it's about big-picture stuff. The evaluation at the level we're talking about is about whether the developer has a clue what they're doing. Do they have a long-term plan for keeping this thing secure? Do they have an incident management plan?
"Extracting product design architecture from code is incredibly difficult. If you haven't got a relationship with the developer to ask 'Why did you design it like this?' then evaluation is really difficult and often you need a third party to act in this role. It's a business risk to be managed."
Whereas a commercial company may have several layers of assurance for developer identity — an on-boarding process, an identity process, a technical identity process for checking-in source code — establishing developer identity can be far trickier for some open source projects.
"For some of these projects it's a Gmail address. Who wants to bet the farm on the security of someone's Gmail account? There are ways around all of these, but these are things you have to think about."
"I can audit a company and say 'You have these standards and apply them and yes you have incidents but you manage them well'. How do I do that for a diverse set of developers on their own hardware?"