
"Sanity check" your cybercrime statistics

The difficulty of telling fact from fiction in cybercrime news has been getting worse over the past few years. For decision makers, that means a "sanity check" on reported stats should be part of the everyday toolkit.
Written by Violet Blue, Contributor

Decision makers who struggle sorting facts from fiction with cybercrime news can be forgiven for thinking that someone, somewhere might be getting... a little dramatic.

Headlines like "Patch your Chrysler vehicle before hackers kill you" might be lulzy for some, but the difficulty of telling fact from fiction in cybercrime news has been getting worse over the past few years.


More and more cybercrime surveys are mislabeled as studies, security company PR gets reported as news, reliable stats on cybercrime are elusive, and it's almost impossible to tell realistic threats from headline trends.

This isn't a surprise to anyone who's watched cybercrime reporting move from tech blogs into prime time's spotlight. But when misreported research gets into circulation and begins to influence opinion, it can lead to the misallocation of precious resources -- budget and manpower -- and can erode leadership confidence. With all eyes on the threat of a breach (or worse), decision makers simply can't afford to make mistakes with risk assessment right now.

Cybercrime numbers are tough to trust even from highly regarded sources. Take, for example, the FTC's identity theft estimates: $47 billion in 2004, $15.6 billion in 2006, and $54 billion in 2008. In the paper "Sex, Lies and Cyber-Crime Surveys," Microsoft researchers Dinei Florencio and Cormac Herley concluded that "Either there was a precipitous drop in 2006, or all of the estimates are extremely noisy."

By saying the estimates are noisy, the researchers are referring to a signal-to-noise ratio where the signal -- accurate cybercrime numbers -- is "overwhelmed with the noise of misinformation."

The 2012 Microsoft research sounds like it was written this week: it cites headlines blaring that cybercrime "has doubled" ... or declined.

There are many complex issues that keep surveys, reports and studies from being accurate, aside from the fact that cybercrime news and cybercrime clickbait are nearly indistinguishable. Companies have a hard time knowing what was stolen -- was it a breach or a blunder? -- and in the absence of strong, well-enforced federal disclosure laws, companies (and government agencies) misreport to avoid embarrassment and legal issues. Also, lest we forget, cybercrime's black market is a covert one.

That's just part of the problem.

Add in a sector rabidly competing for attention (infosec) at a time when cybercrime compels unheard-of spending on security, then compound this mess with a news cycle struggling to pull in as much revenue as possible. Telling news from spin becomes a pretty tricky thing to pull off.

The 2014 RAND report "Markets for Cybercrime Tools and Stolen Data" saw RAND -- refreshingly -- admitting it had difficulty assessing street values and verifying the cost of exploit kits and zero-days, due to the nature of the illicit market as well as its law enforcement sources' reluctance to divulge sensitive information.

Cybercrime's own distortion field may also have a bit to do with the fact that some of the companies issuing reports -- namely, ones that sell cybercrime prevention and detection software -- are stakeholders in cybercrime's reputation as a growth industry.

One well-known example of fudging was the 2009 report by the Center for Strategic and International Studies, which estimated hacking costs to the global economy at $1 trillion. President Barack Obama, various intelligence officials, and members of Congress have cited this number when pressing for legislation on cybercrime protection.

International Business Times reported:

Turns out that number was a massive exaggeration by McAfee, a software security branch of Intel that works closely with the U.S. government at the local, state and federal level.

A new study by CSIS found numerous flaws in the methodology of the 2009 study and stated that a specific number would be much more difficult to calculate.

The follow-up report, again done in partnership with McAfee, produced numbers that varied so widely it raised an estimated one trillion eyebrows when it hit the press, though its $100 billion to $400 billion range was still a fraction of the 2009 FUD sideshow. The $1 trillion figure was republished in 2011, and the incorrect number still circulates. One member of the study team told The Economist that finding accurate data was such a problem that the team had "joked about publishing the findings along with an online random-number generator that readers could click on until it produced an estimate to their liking."

Despite all this, most of the numbers we see in day-to-day cybersecurity reporting are from computer security firms themselves, and if their PR departments are worth anything, those numbers are fluffed, hyped and soundbite-ready.

As a result, budgets get misspent, resources get under- or over-utilized, and all that expensive threat intelligence everyone's collecting just gathers dust.

"Sanity check" your cybercrime news

For decision makers charting cybercrime headline hysteria, the only option is to double down on recognizing the signs of BS in news about costs, losses, or threats.

It's crucial to do a "sanity check" when the news is based on claims in a survey, report, or study.

One thing to look out for is surveys: too many writers mislabel surveys as studies. Look for the source: Is what you're reading based on a survey or a study -- or just a report issued by a company?

Whose input is the information based on: A company, a company's clients, or a sampling of the general population?

For decades, much of the information we have on cybercrime profits and losses has been derived from surveys, where unrepresentative sampling and extrapolation skew information that's already challenged by being based on unverified, self-reported numbers.

The "Sex, Lies and Cybercrime Surveys" team said, "Far from being broadly-based estimates of losses across the population, the cybercrime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population." They explain, "a single individual who claims $50,000 losses, in an N = 1000 person survey, is all it takes to generate a $10 billion loss over the population."

According to the researchers, the result becomes that one unverified claim of $7,500 in phishing losses "translates into $1.5 billion."
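To make the arithmetic behind that kind of extrapolation concrete, here is a minimal sketch. The population figure of roughly 200 million adults is my own assumption, chosen only because it reproduces the $10 billion and $1.5 billion figures quoted above; the researchers' actual weighting may differ.

    # Minimal sketch of how a single self-reported loss gets extrapolated
    # to a population-wide estimate in a typical cybercrime survey.
    # Assumption: ~200 million adults, the figure implied by the $10B and
    # $1.5B examples quoted above.

    SAMPLE_SIZE = 1_000          # N respondents in the survey
    POPULATION = 200_000_000     # adults the survey claims to represent

    def extrapolate(claimed_loss: float) -> float:
        """Scale one respondent's claimed loss to the whole population."""
        per_capita_loss = claimed_loss / SAMPLE_SIZE
        return per_capita_loss * POPULATION

    print(f"${extrapolate(50_000):,.0f}")  # one $50,000 claim -> $10,000,000,000
    print(f"${extrapolate(7_500):,.0f}")   # one $7,500 claim  -> $1,500,000,000

The exact population figure matters less than the leverage: whatever number you scale up to, a single unverified outlier drives the entire headline estimate.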

Further, cybercrime surveys suffer from... human nature. "Sex, Lies and Cyber-Crime" explained that, for example, "an unsatisfactory online auction experience or dispute with a merchant might easily be conflated with 'online fraud.' The FTC survey which finds an individual respondent trying to report a claimed loss of $999999 'theft of intellectual property' as ID theft is just such an example."

Check to see if the survey is valid. Is the methodology clearly disclosed? What is the sample size? Does this seem reasonable?

Beware of unchecked and unverified statements and statistics. Is the source, and the information, verifiable in any way?

Next, look for fair reporting. Is the information presented with the other side of the story or counterpoint research? Does the source of the information have a stake in the outcome? Is there a personal or professional interest on the part of the reporting source? Is the reporter a "fan" of a person involved in the research, or do they have a company preference? Does the news outlet tend to favor or decry anyone, or anything?

Threat intelligence is an example of a cybercrime topic trend. Most organizations know they need to 'do' threat intelligence, yet few understand, or can agree on, what that means.

Today, there is a large number of threat intelligence (TI) vendors and advisory papers (often issued through vendors' marketing departments) describing wildly different products and services, all under the banner of threat intelligence. These papers sometimes end up as news -- and one quick way of separating the beer from the foam is to look at the problem a news story presents and see how the article proposes a solution: If the solution comes from only one company, then you're looking at a company product.

Next, look at the threat being posed in the piece you're reading. Does this threat actually apply to your organization or your customers? Is this an attack that can only happen under highly unusual circumstances? Is it old news? Has the threat or issue been resolved, yet this information is buried in the article? Is the phrasing "are affected" (an active attack) or "could be affected" (a possible attack if you squint and angle your head while looking at the problem)?

The quest for accuracy makes us better at finding strong information with which to make risk assessments that weather even the worst, most ridiculous trends.

And right now, if you ask me, things are getting pretty ridiculous.
