The only way to catch such errors is to put the software in question under a microscope: manually auditing source code and probing for buffer overflows, unexpected responses to input and unpredictable interactions with other software.
As I've also said before, that kind of testing is not popular among software development companies. Effective security regimes are slow, expensive and resource-intensive. More importantly, they add very little to the bottom line; security testing simply isn't "sexy" from a marketing perspective. Potential customers have very little basis for distinguishing between well-tested, poorly tested and untested software until well after the fact.
Public awareness of security issues is on the rise, however, and vendors are starting to feel the pressure to demonstrate that they take those issues seriously. In response, they have increasingly begun to focus on an unfortunate type of pseudo test—the "hacker challenge."
By making a test platform with their product available to the public and granting permission for anyone to attempt an attack, vendors can theoretically enlist the hacker community to help dig out vulnerabilities.
For the vendor, that is a win-win scenario. If a flaw is found, the company can get to work fixing it and showcase its conscientiousness and responsiveness. If no flaw is found, the firm can declare its product unbeaten in the face of realistic attacks.
By evoking the "hacker mystique" and competitive impulses, that so-called "testing" regime can generate the marketing sex appeal that more traditional methods lack. Perhaps most importantly, the process is extremely cheap—almost all of the labor is essentially free.
Unfortunately, by themselves those challenges are an extremely poor testing technique. Without access to source code, development history, and other proprietary resources, outside volunteers may take months or even years to locate vulnerabilities that an insider might find in hours.
That is particularly true in open, widely publicized competitions, which draw large numbers of unskilled participants. Moreover, a skilled attacker who does find a weakness may well prefer to keep it secret for his own use, rather than speak up and see it patched.
Hacker challenges can prove to be a valuable supplement to a well-developed testing regime. Many problems can only be exposed by hard use under realistic conditions, and outsiders often bring fresh perspectives and approaches that may be missing in an insular development environment. More often than not, though, they are essentially publicity stunts that do little more than promote false confidence.
David Raikow is technology editor of Sm@rt Partner.