It always struck me as a simple deal: there are benefits to openly participating in the security research community - peer recognition and job opportunities. There is also a cost of doing it as a hobby - loss of potential income in other pursuits. After having made a name for themselves, some people decide that the benefits no longer offset the costs - and stop spending their time on non-commercial projects. Easy, right?
Well, this is not what's on the minds of several of my respected peers. Sometime in 2009, Alex Sotirov, Charlie Miller, and Dino Dai Zovi announced that there would be no more free bugs; in Charlie's own words:
"As long as folks continue to give bugs to companies for free, the companies will never appreciate (or reward) the effort. So I encourage you all to stop the insanity and stop giving away your hard work."
The three researchers did not feel adequately compensated for their (unsolicited) research, and opted not to disclose this information to vendors or the public - but continued the work in private, and sometimes boasted about their inherently unverifiable, secret finds.
Is this a good strategy? I think it is important to realize that most vendors, being driven by commercial incentives, spend exactly as much on security engineering as they think is appropriate - and this is influenced chiefly by external factors: PR issues, contractual obligations, regulatory risks. Full disclosure puts many of the poor performers under intense public scrutiny, and may force them to try harder and hire security talent (that's you!).
Precisely because of this unwanted pressure, vendors probably do not inherently benefit from such unsolicited services, and will not work with you to sustain them: if you "threaten" them by promising to essentially stop being a PR problem (unless compensated) - well, don't be surprised if they do not call back soon with a job offer.
Having said that, there is an interesting way one could make this work: the "pay us or else..." approach - where the "else" part may be implied to mean:
Selling the information to unnamed third parties, to use it as they see fit (with potential consequences to the vendor's customers),
Shaming the vendor in public to suggest negligence ("company X obviously values customer safety well below our $10,000 asking price"),
Simply telling the world without giving the vendor a chance to respond.
There's only one problem: I think these tricks are extremely sleazy. There are good and rather uncontroversial reasons why disclosing true information about an individual is often legal, but demanding payment to withhold it never is; the parallels to blackmail are really easy to draw.
This is why I am disappointed by the news of VUPEN apparently adopting this very strategy (full article); and equally disappointed by how few people called it out:
"French security services provider VUPEN claims to have discovered two critical security vulnerabilities in the recently released Office 2010 – but has passed information on the vulnerabilities and advice on mitigation to its own customers only. For now, the company does not intend to fill Microsoft in on the details, as they consider the quid pro quo – a mention in the credits in the security bulletin – inadequate.
'Why should security services providers give away for free information aimed at making paid-for software more secure?' asked [VUPEN CEO] Bekrar."
Here's the thing: security researchers don't have to give any information away for free; but if you need to resort to arm-twisting tactics to sell a service, you have some serious soul searching to do.