Why whitehats don't want to help businesses at risk

Summary: Well-intentioned hackers might discover plenty of security vulnerabilities during their travels across the internet, but when businesses sue them or make it hard to pass the information along, it's no wonder they don't bother.

Any smart business has a process in place to take its customers' feedback seriously in order to help it grow and offer better services. But when it comes to security — arguably an area that many organisations could do with a helping hand — no one is going to want to help because of how hard companies have made it.

There's a story about how most elusive discoveries are made: rather than the hyped-up eureka moment that many believe occurs, they often come from someone tinkering with something, not looking for a breakthrough, and realising that something is strange. In the same way, many security flaws aren't found by someone hell-bent on breaking into a business and ruining it; if they were, that person wouldn't inform the organisation of exactly where its issues lie.

Yet when a good Samaritan does exactly that, they're left feeling like they're going to go to jail. One example is New Zealand's recent case involving the Ministry of Social Development. Freelance journalist and blogger Keith Ng pointed out glaring security oversights, but despite only sharing this information with officials and the privacy commissioner, and giving his personal guarantee that he would not pass it to anyone else, Ng was clearly concerned about what action might be taken against him.

In the end, Ng decided to lawyer up, and while the ministry eventually said that it would not pursue any legal action, one has to ask whether that was because his position as a journalist would have created a public relations nightmare.

Looking further back, Patrick Webster attempted to warn First State Super that its system had issues, an act that was praised at the time as "the right thing" to have done. Later, without any notice, he had the local police on his doorstep. No good deed goes unpunished.

People like Webster, professional security consultants or penetration testers, have a knack for discovering these sorts of flaws even in their daily browsing (you can't, after all, just turn off the ability to spot bad practice outside of work hours). However, I have yet to meet a single professional who will go out of their way to inform a company they're not already doing work for. On the surface, it might look like they're passing up a free opportunity to win some business, but the reality is that getting involved in someone else's affairs like that starts to look like extortion.

From the vulnerable business' point of view, a skilled hacker has been sniffing around their systems and just "happened" to find a flaw. What's that? You also just happen to offer the same services to test for weaknesses and we should hire you? You happen to run a security blog too, where you write about flaws? Oh, and you also talk to the media sometimes. Right.

On the flip side, there are businesses that are switched on and have realised that enlisting curious hackers to point out their flaws is an extremely cost-effective way of getting a rough-and-ready penetration test, sometimes at no cost. The problem is that it has to be done their way, and the rewards are often non-existent or simply not worth a hacker's while.

Most offer no monetary reward whatsoever for reporting bugs, and place further restrictions on those interested in testing for them, asking reporters to stay silent on the issue until it is resolved.

Despite the catch-all statement that customer security and privacy are of the utmost importance, many organisations take a staggeringly long time to reproduce even simple errors that, in many cases, security folk already know how to fix and often even send suggested code for. This means that reporters have to hide their achievements away for weeks, sometimes with little acknowledgement from the vulnerable organisation.

At the end of the process, the only thing they have to show for their months of effort is their name on the company's security page, if they even have one. To put this into perspective, even Twitter's translators receive more recognition for their efforts in the form of profile badges and achievements.

Facebook's approach is similar when it comes to asking for hackers to delay their disclosure, but under the threat of legal action:

If you give us a reasonable time to respond to your report before making any information public and make a good faith effort to avoid privacy violations, destruction of data and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.

To me, that doesn't read as a reassurance that Facebook won't sue you if you're well-intended. It reads as, "If you don't play by these specific rules, you better lawyer up, because we're coming back for everything".

It at least goes a step further by offering bounties, though it only says that its minimum reward is US$500. Furthermore, it excludes denial-of-service vulnerabilities and spam or social-engineering techniques from its program. If criminals have any better use for Facebook, it's socially engineering users in order to launch targeted attacks. And what script kiddie wouldn't want to DoS Facebook for the attention? Professional penetration testers can attest that just because they're not required to test a particular system or attack vector doesn't mean that others won't.

And I can't talk about bounties without at least mentioning Google and its program, which arguably pays hackers among the best rates. That's great for whitehats, but it simply can't compete with underground markets or even the US government.

At the end of the day, the ethical hacker is faced with this choice: report the vulnerability to the company and open themselves up to legal action, little to no reward for their efforts, possible claims of extortion, reputational damage, and embargoes on their own discoveries, all for a warm fuzzy feeling; or keep it to themselves and move along.

The problem is that the less scrupulous hackers out there, the ones selling vulnerabilities who are at the root of the problem for vulnerable businesses, are banking on the ethical hacker keeping their mouth shut. And in their effort to shoot at whatever they happen to catch a glimpse of, friend or foe, businesses are only helping these bigger threats remain undetected.

About Michael Lee

A Sydney, Australia-based journalist, Michael Lee covers a gamut of news in the technology space, including information security, state government initiatives, and local startups.

Talkback

  • Actually, there is a white hacker being held in custody right now in ...

    ... the Netherlands during the investigation. This hacker revealed some serious security issues at a hospital, the Groene Hart Ziekenhuis, and was able to access the information of about half a million patients. The government is shocked by this crackdown, but at the same time, legal action against the hacker was advised by a governmental organisation, the National Cyber Security Center (NCSC). Meanwhile, the NCSC is trying to set up a hotline where white hackers can report security issues...

    http://www.nu.nl/internet/2970387/kamer-ontsteld-harde-aanpak-hacker.html
    http://www.nu.nl/internet/2970526/hackers-wantrouwen-meldpunt-overheid.html
    wjosdejong
  • Very Complex

    I too have my own experience with "no good deed goes unpunished," but I see the scenario as painfully complex.

    - For the good of all, we must embrace the voluntary disclosure of "hey, your system has a security flaw."

    - For the protection of the company and the ability to "see" real attacks in our logs, etc., we cannot freely promote unauthorized, undocumented hack attempts at our networks.

    The complexity comes from the harm caused by the unknown: harm from the unknown vulnerability, and harm from the unknown non-maliciousness.

    Had a bad guy been poking about the system and realized they'd made a dumb mistake that could get them caught, they might successfully alert the victim that they "discovered" a security flaw as a way to cover up their ill intentions.

    The sensitivity of some systems is such that downtime is expensive, yet downtime is cheaper than a real hack. If downtime is caused by a non-malicious vulnerability scan, it still costs real money and causes real harm. On another front, if you're receiving multiple attacks and actively defending against them, you might be misled into defending against the non-malicious hack while being penetrated by the real bad guy.

    Let us also recognize that hack attempts tax our security devices. If several non-malicious hack attempts are made without authorization, they might overload the security devices in the network, effectively causing a DoS attack, or worse yet, crashing the appliance and putting it in a vulnerable state, which leaves the network as good as firewall-less until it's reset. The squeaky wheel gets the grease, and the nail that sticks up gets hammered down.

    Of course, if we punish those willing to help because of policy, rather than determine their purity, we're left with a world that harms more than the place that was hacked without authorization. An eye for an eye leaves the whole world blind.

    We see this in the non-digital world too, when a random stranger stops to help the person who was in a bad accident, only to be sued by the person later.

    In the end, it's a problem with the world, not just technology.

    You summed it up well... No good deed goes unpunished. It is our choices that separate us from others.
    ct2193@...