The good guy must protect every possible hole. The bad guy must find just one.
Which brings us to the latest Android exploit, Google's response, and the real question we should be asking: is open source software inherently more secure, or less secure, than proprietary software?
Open source code is visible. Proprietary code is, to legal users, invisible, although attackers inevitably make it visible in the process of crafting an exploit.
This means that open source code can always be seen, by both good guys and bad guys. Proprietary code can be seen only by committed bad guys, and some trusted good guys.
Open source code can be seen by uncommitted bad guys, by guys who might turn bad if they see code they can exploit. Proprietary code cannot be seen by these people.
The question becomes: do these semi-bad guys pose a serious enough threat, and is that threat offset by the fact that semi-good guys can address it?
It's the role of the semi-good guys I want to emphasize. These are programmers who are not security experts and who may be contributing to the code in some other way, but who, because they can see the code, have the opportunity to both find and patch potential exploits.
I liken them to a neighborhood watch, like the one that protected my street for many years. A local police officer marveled that our crime rate was 30% lower than that of the blocks on either side. "We watch each other," I told him.
Proprietary programs don't have many semi-good guys. The code is invisible except to those authorized to see it and those who break the law to look at it.
Which scenario is more secure?
These questions are meaningful far beyond Google Android, of course. They really go to the heart of our attitude about all security issues.
Most security advocates feel knowledge about security issues should be restricted. This idea is deep in their culture. It appears everywhere, far beyond software, and has been a big part of the news in our time.
When security experts try to obscure pictures from Google Earth, they are manifesting this attitude. When they refuse to share their no-fly lists, they are manifesting this attitude.
Open source challenges this attitude directly. It says bad guys are an aberration, that goodness can generally protect itself, and that cooperation, in the end, breeds more security than mistrust.
In this way the open source paradigm goes far beyond software. It may be why many security experts fight it so hard inside software. If their assumptions are wrong there, could they be wrong everywhere?