Lawsuits threaten infosec research — just when we need it most

Security researchers and reporters have something in common: both hold the powerful accountable. But doing so has painted a target on their backs — and looming threats of legal action and lawsuits have many concerned.
Written by Zack Whittaker, Contributor

NEW YORK, NY -- This year, two security reporters and one researcher will fight for their professional lives in court.

Steve Ragan, senior staff writer at tech news site CSO, and Dan Goodin, security editor at Ars Technica, were last year named defendants in two separate lawsuits. The cases are different, but they have a common theme: they are being sued by the companies covered in articles they wrote.

Although lawsuits targeting reporters, particularly on the security beat, are rare, legal threats are an occupational hazard that reporters are all too aware of, ranging from a company calling an editor to demand a correction -- or else -- to a full-blown lawsuit.

But the inevitable aftermath is a "chilling effect." White-hat hackers and security researchers hesitate to report vulnerabilities and weaknesses to technology firms for fear of facing legal retribution.

With nation-state attackers targeting elections and critical infrastructure on a near-daily basis, security research is needed more than ever.

The most recent act of legal aggression comes from Keeper Security, which filed a civil defamation suit against Goodin in December for allegedly making "false and misleading statements" about the security company's flagship password manager software. The software contained a vulnerability -- which the company confirmed existed -- that allowed "any website to steal any password," according to Tavis Ormandy, a security researcher at Google's Project Zero, who filed the original bug report and informed Keeper of the flaw.

Ormandy said the bug posed the same threat as one he had reported some 16 months earlier, and he released the details of his report only after Keeper remediated the bug.

Goodin was one of the first to report the story. After publication, his story was revised to clarify that the bug only affected the company's browser extension -- which is bundled by default with the downloadable app -- and not the app itself.

But Keeper disputed those particular claims in its complaint, filed days after Goodin's article was posted.

When Goodin's lawyers responded to the complaint earlier this month, they argued that the article was "substantially true," because the article stated that the company "had a security vulnerability that it was forced to fix," and it was "the same (or virtually identical) kind of flaw" as the bug that Ormandy found 16 months earlier.

Goodin's lawyers asked the court to dismiss the complaint, arguing that it "represents precisely the kind of technical hairsplitting the courts refuse to indulge when evaluating substantial truth."

Fortunately absent from the complaint's list of defendants was Ormandy, who first made the claims echoed by Goodin in his original bug report.

Chris Vickery, a well-known security researcher and data breach hunter, wasn't so lucky.

He was also named alongside CSO's Ragan in a separate defamation lawsuit.

Last year, Vickery approached Ragan with a story: A little-known company, River City Media (RCM), had left its servers exposed online without a password, revealing a "massive, illegal spam operation." Vickery, working with Ragan, a highly respected veteran security reporter, provided the data to Spamhaus, a spam intelligence organization, which "concluded that RCM has been using illegal IP hijacking techniques during some of their campaigns."

It was a massive story. But what isn't widely known is that, just two weeks later, RCM filed a civil lawsuit against Ragan and his employer IDG, which owns CSO, as well as Vickery and his then-employer Kromtech, calling Vickery a "vigilante black-hat hacker."

The cases are still pending. Both Keeper and RCM are seeking financial damages.

Vickery, now the director of cyber risk research at UpGuard, defended his work, telling ZDNet in an interview some weeks later that RCM "made up a lot of things I'm certain they can't prove."

When asked if there would be a negative impact on publishing security research if the lawsuit is successful, Vickery said: "If they can make up and fabricate events and have a jury believe them -- well that's going to have a far greater effect than chilling researchers and data breach reporting."

That "chilling effect" was specifically mentioned by Goodin's lawyers in their motion to dismiss his case.

"The technology community is vigilant in policing such vulnerabilities, and permitting this case to go forward would have a profoundly chilling effect on cybersecurity research and reporting generally," his lawyers said.

In other words, hackers and security researchers on the right side of the law are more likely to self-censor if they think they may be sued -- or see others successfully sued -- for doing their jobs.

In the last year alone, several security researchers have revealed that they have been the target of legal threats.

PwC sent cease and desist letters to a researcher who found a critical flaw in one of its security products. An executive at hacked dating site Ashley Madison threatened to sue one reporter for publishing leaked emails that claimed the company hacked competitors. One security researcher submitted a report through drone maker DJI's bug bounty program about a bug that exposed the company's encryption keys online, only for the company to threaten to sue under US hacking laws. And a smart lock maker threatened to sue an IOActive security researcher, claiming the researcher violated intellectual property laws by finding a serious bug in one of its flagship products.

These represent only a fraction of cases where legal threats resulted from security research.

But in some cases, researchers have opted to self-censor rather than publish their work, for fear of legal repercussions. In the last year, ZDNet did not publish three security stories after researchers abandoned their work, fearing legal threats.

We wanted to see how the security landscape looks today, so last week we spoke to eleven white-hat hackers and security researchers. Not one said they were unconcerned about the threat of lawsuits or legal action.

"I think that concern comes with the territory in security research," said Will Strafach, a security researcher with a focus on mobile security, who has disclosed dozens of vulnerabilities in the last few years.

Emily Crose, a former NSA hacker, said there was a "lag" between the law and the technology.

"We have great people working in the field to help shape the law, but yeah, it's still a treacherous topic for many of us," said Crose.

"Technology is fundamentally changing the way we live our lives, and new doors are being opened that lower the bar to entry on work like infosec research," she said. "If you pair that with things like intellectual property law, you end up with a situation where a lot of people doing legitimate research end up in a sort of legal gray area."

But she said the gap between what is considered lawful and what isn't is "only getting worse," and that she would have to weigh "a lot of factors" before taking on a project that could draw threats of legal action.

"It's all a calculation, you know?" she said.

Some have actively walked away from the vulnerability disclosure space and pivoted into other areas of infosec altogether, exhausted from the pressure of possible lawsuits.

Sarah Jamie Lewis, an anonymity and privacy researcher, said she is "pretty much constantly" concerned about legal threats, and has stopped bug hunting in favor of building new technologies.

"I've been much more conservative this year," she said, referring to her workload. "I've not taken on another project that's involved vulnerability finding in products because the thought of any more legal threats doesn't appeal to me."

"The main direction I'm taking my research now is constructive -- building projects and going after grants to fund new technologies rather than fixing existing systems, because existing systems often come with corporate baggage and trying to report vulnerabilities and working with those systems -- it's not just the legal risk but the amount of effort required," she said.

"Facing legal action is one of those things where it's just not worth it anymore," she said.

Security researcher BlackRoomSec said she was "holding back" her research because she refuses "to do the work and then have to defend myself in court."

"I also would prefer companies take responsibility for their actions and not try to hide it," she said, naming Equifax and Uber as recent examples. "Placing blame on outsiders, after the fact, when it was their own employees who were at fault for not properly sanitizing user data or securing their products is ludicrous."

"I feel that we are going the extra mile to help companies find bugs which were not found during the initial testing they did and to turn around and threaten us with legal action is unfair," she added.

But she said she would report a severe vulnerability -- even if it meant facing a lawsuit.

"I'd have to deal with the fallout," she said, bluntly.

Daniel Gallagher also worries about legal threats. He said that smaller companies without dedicated security resources can often respond in a "defensive" way.

Gallagher has made a name for himself analyzing malware. He has worked with several other prominent security researchers to battle botnets and fight malware infections, and he has built up a network of law enforcement contacts who can vouch for his work. But it's private industry that worries him most.

"Past experience has shown that reaching out to a small company out of the blue that likely has no infosec person on the payroll can result in them getting defensive." In those cases, he said, he opts to alert the proper law enforcement agency "so the problem is not ignored."

One independent researcher, who asked not to be named, said he will "simply post details of a flaw anonymously online."

"It happens quite often," he said with disappointment. "I reported several vulnerabilities and was threatened via email," he said. Those kinds of experiences led him to resort to "less favorable channels of disclosure out of fear the company will sue if they have, or have had a rocky past with folks trying to contact them."

Another former independent researcher spoke of two cases where he conducted security research in his early career -- both of which drew legal attention.

"Most cases resolve with nothing serious once you manage to explain who you are and why you do what you do," he said. "In some cases yes, it goes as far as a lawsuit and the outcome mostly depends on the country and the political influence of the target."

He said he "never did it for the money," but he's thankful now that bug bounty programs exist to draw a clearer "distinction between a security breach and a free pen-test."

Though the number of hobbyist hackers and independent researchers is arguably growing, many of those working for larger security companies have in-house counsel backing their work.

But Amit Serper, principal security researcher at Cybereason, and Jake Williams, founder of Rendition Infosec, say the risks still exist even for them.

"As long as I work there, I have legal on my side," said Serper. Just two months earlier, he received several cease and desist letter from TargetingEdge in an effort to prevent him from publishing research on adware built by the company.

"We decided to publish anyway because we're sick of shady 'adware' companies and their threats," he told ZDNet at the time.

Jake Williams runs his own cybersecurity firm, and gets paid to conduct penetration testing and emulate real-world attacks to test network defenses for corporate customers. When we spoke, he said that his firm has faced a "few brushes with legal threats from vendors over the years."

"We definitely consider actual lawsuits as a threat and factor that into our work," he said.

"In most cases my clients own the vulnerabilities that we discover in their products," he explained. "But in the few cases that we own them we're very careful about how we disclose them, especially given lawsuits like Keeper's."

Leigh Honeywell, a technology fellow at the American Civil Liberties Union, said she too worries about a chilling effect in the wake of recent lawsuits.

"Which is why it's so important to point out how irresponsible, pathetic, and amateurish lawsuits like Keeper's are, and to hold the executives, employees, and investors in companies who pull that kind of stunt accountable for their nonsense," she said.

It's no surprise, given the legal risk to researchers, that few wanted to speak on the record about the threats they had faced. But one researcher did -- and spoke frankly of when he was sued for slander as a result of findings in his early security work.

Johnny Xmas may be best known for releasing the master key for luggage locks used by the Transportation Security Administration two years ago. In his early days, his hobby-hacking led him to uncover a flaw in the magnetic stripe found in his university's student ID cards. Each student's information was easily accessible -- including their Social Security number -- and all he needed was a student's card, "which everyone lost like once a month," he said.

"I thought I'd let everyone in the school know about the issue the same way I let them know about my band's shows: by wallpapering the hallways in flyers," he explained in an email.

"The school of course caught wind, verified the issue, and attempted to sever their contract with the ID company citing a breach in a clause where the company agreed to secure the personal identifiable information," he said.

That company, which he did not name, was "livid at the loss of a huge client" and sued him directly for slander -- a claim he denies, "as my claims were verifiably true," he said.

Xmas was also expelled from his school.

He was later forced to settle out of court after the company, in his words, "dragged the case out until I ran out of funds to pay my counsel." He denied ever committing slander.

"I did not end up having the fine fully paid off until my early 30s, likely effectively ruining my chances for a comfortable retirement," he said.

Xmas said he can tell his story at a hotel party and it's "guaranteed to start the usual one-upping loop of war stories that would go until the sun comes up."

"In the same way that you couldn't call yourself a 'hacker' pre-2010 unless you had the police record to prove it, you can't call yourself a 'security researcher' without showing the legal fees associated with the ocean of nonsense you now have to run past your attorney," he explained.

Security researchers, like reporters, hold the powerful -- people, companies and governments -- accountable for their failures, whether deliberate or inadvertent.

But doing so paints a target on their backs.

"The selfish part of me is, of course, terrified of going to prison or being sued into oblivion again," he said. "But then the altruistic part of me is still inside there somewhere yelling Spock's famous line: 'The needs of the many outweigh the needs of the few!'"

"There's something inside us that makes us do what we believe is truly right, no matter the cost," he said.

The trick, he said, is to "always bear in mind that what is 'right' is always subjective, and the costs may greatly exceed your initial estimates."

But with legal cases looming, that notion of what is "right" may soon be redrawn by the courts. For now, all reporters and researchers can do is watch their colleagues' cases unfold.
