
Are vulnerable times responsible times?

Is there a fine line between conscientious bug-finding and ransom...?
Written by Patrick Gray, Contributor

Security professionals say they're making computing safer, but are they doing more harm than good? Patrick Gray talks to independent security researchers, a controversial operator and Microsoft's chief security engineer to find out.

The internet is one big, bad neighbourhood. Try connecting a freshly loaded Windows system - no patches - to the internet. How long would it last? 10 seconds? Maybe 20?

Then imagine a nightmare scenario. Your computer, with all patches loaded, is attacked by a hacker who possesses vulnerability information not in the public domain. They know a way in and there's no way to stop them; no patch for the security hole because your software supplier doesn't know it exists.

This is why software companies want security bug catchers to tell them when they find a flaw. They can write a patch and distribute it to customers before malicious hackers can attack systems through the weakness. But one such researcher, Dave Aitel, doesn't want to do that.

Aitel is a man with a reputation. In private, many security researchers say he's unethical; a rogue operator placing computer users across the globe at risk. Others say he's a gun researcher, protecting his clients in an era of irresponsible security practices among large software companies.

Aitel's company, Immunity Inc, raised more than a few eyebrows in January when it released details of a security vulnerability in Apple's operating system software to the public without giving the software company prior notification. The result? Apple customers were aware of a security flaw in their software, and had no way to fix it.

But the very same vulnerability details were shared with Immunity's clients as far back as June 2004. Why?

Aitel explained: "Immunity's policy on vulnerability information does not include vendor notification."

Aitel has a habit of answering the questions he wishes you'd asked, not the ones that you actually did.

But he offers this: the way he sees it, he's providing his customers with information about vulnerabilities in greater detail than the vendors do, and that's a service worth paying for.

$100,000 will get you into Aitel's Vulnerability Sharing Club; $50,000 for smaller companies. Any company that joins must sign a non-disclosure agreement, so information about vulnerabilities in popular software doesn't fall into the wrong hands.

Needless to say, some vendors are less than impressed. George Stathakopoulos, Microsoft's chief security engineer, wouldn't talk about any specific company, but says responsible vulnerability disclosure is vital.

"Any individual or organisation that behaves in a way that potentially puts... customers at risk is a huge concern," he says. "We continue to urge security researchers to disclose vulnerability information responsibly and allow customers time to deploy updates so they do not aid criminals in their attempt to take advantage of software vulnerabilities."

Greg Shipley, chief technology officer of Chicago-based security outfit Neohapsis, reserves judgement but says the existence of private vulnerability sharing clubs like Aitel's raises some serious ethical questions.

"When you start talking about advanced release times, publishing exploit code, and introducing a mercenary angle to what is essentially... a public quality assurance process, you start entering some really murky waters," he says.

The trade in information that allows the buyer to easily penetrate computer networks is dangerous, Shipley argues. "If it simply boils down to the highest bidder, we're in for some real problems."

"If anyone with a few dollars can afford to 'buy into' such an information ring and get access to tools that blow past most corporate defences, what's to stop some truly malicious folks from using that information for truly evil purposes?" Shipley asks.

"Zero-day", or unpublished security vulnerabilities are becoming the "tactical nukes" of cyberspace, Shipley argues; the Holy Grail. He doesn't want to see them falling into the wrong hands.

But Ken Pfeil, chief security officer at Capital IQ, a web-based provider of financial data services, isn't alarmed. Services offered by companies like Immunity are ethical, he says, "as long as they hold the information to themselves and sign the members to a non-disclosure agreement". Still, he acknowledges the sensitive information may "leak", but that's not Aitel's fault, he says. Vulnerability information leaks have sprung from other sources, like the Carnegie Mellon University-based research outfit CERT, which receives US government funding.

"No one holds CERT accountable when a member leaks information, so why would this be any different?" Pfeil asks.

Perhaps some in the security industry are merely annoyed that Aitel has the gumption to turn vulnerabilities into cash in such a controversial way. In the eyes of many, simply having access to vulnerability information is a lesser sin if you're a researcher. It's ironic, considering some prominent researchers have been known to dabble in illegal activity.

Pfeil has used Aitel's services in the past, and is a satisfied customer. "I hired him to do a code review at our company last year. He did a very good job," he says.

One thing became very clear while researching this article about Immunity Inc: Aitel is popular. Even some of his biggest critics say he's funny and affable; one former colleague describes him as "hard not to like".

Aitel spent six years working with the National Security Agency in the US before moving to the private sector. Ron Gula, the creator of Dragon IDS and co-founder of Tenable Network Security in the US, also worked for the NSA. Gula, a competitor of sorts to Aitel, shies away from vulnerability research. It's expensive, time-consuming and not worth the hassle, he says.

But Gula has also benefited financially from inadvertently finding vulnerabilities in software, simply through the publicity. He knows finding bugs pays the bills, even when disclosure is handled differently. It's proof that the rational rules of commerce, and perhaps ethics as a knock-on effect, don't apply in the bug-hunting game.

"The few vulnerabilities we've inadvertently discovered got Tenable on CNN and sent a lot of business our way," Gula says.

Even when a vulnerability was discovered in Dragon IDS, Gula said the negative publicity actually helped boost sales. "When Dragon first started, there was a lame exploit for it. This sent a lot of business my way... [people] conclude if it is new and worth hacking, it must be good."

There is demand out there for detailed information about security vulnerabilities, a market vacuum, and Aitel has moved to fill it.

"Software customers should require vendors to provide full, current, and accurate disclosure of every security vulnerability they know about, to their customers," he says.

"While the open source community generally follows this policy, closed source vendors often do not. Educated customers, particularly in the financial community, are now requiring independent third party assessments of software before they purchase it, and are beginning to push back on software vendors with regards to the information they get from them about vulnerabilities."

But Microsoft's Stathakopoulos says his company doesn't want to bury vulnerability information, it just wants to slow down its release. "What worries me is the increase in releasing proof of concept code," he says. "I would like to see the industry self-regulating and delaying the release of POC for at least 90 days."

Proof of concept code exploits a security vulnerability, but doesn't grant access to a vulnerable machine; it's a test. However, armed with a POC, anyone with basic programming skills can alter the code and turn it into a fully fledged exploit.

Some see the release of POC as a way to force software vendors to produce working fixes. If millions of users have the ability to test a security patch with the POC, then the vendor had better make it a good fix.

If there's one thing Stathakopoulos is getting very sick of, it's having to drop everything - including holidays or social plans - when a security researcher slaps an undisclosed vulnerability in a Microsoft product onto a public mailing list. "You have to leave whatever you're doing to go to work and start the process of releasing a security update," he says.

What if software vendors started paying bug-finders for information about security flaws: would this help or hinder? Shipley has doubts. "There's a fine line between fiscally compensating one for their work, and creating a framework for extortion possibilities," he says. "It's that line that I worry about."

But Aitel notes it's not the "security community" that actually finds most of the bugs. "Vendors typically do pay a fee to people who find bugs in their software; they call that fee their 'salary'," he quips. "Most people finding bugs in a vendor's software are QA (Quality Assurance) engineers who work for the vendor." The public never knows about those bugs because they're fixed before the product ships.

Gula agrees with Shipley. If vendors are obliged to pay for bugs, such a scheme will amount to extortion. "There are millions of unknown vulnerabilities and the software manufacturers should not be forced to purchase these. How much are they worth? Who sets this value?" he asks.

So who's to blame for the current state of affairs? Vendors blame irresponsible researchers, and some researchers blame the vendors. As long as bugs are being found, researchers will seek to earn money from them. They'll sell them, or use them for marketing purposes; nothing says "look at me" like a zero-day in Windows.

Until that changes, the security industry will continue to look like the Wild West. For now, it's the users who are left in the middle.