
COVID-19 'tracking' mania, 5G death beams, Nigerian scammers, and gullibility

"Since gullibility is unobservable, the best strategy is to get those who possess this quality to self-identify."
Written by Adrian Kingsley-Hughes, Senior Contributing Editor

One of the legacies of 2020 is that I am still getting emails and comments about my coverage of Apple and Google working together to build a framework for a private and secure COVID-19 exposure notification system into their respective mobile platforms.

It's a gift that keeps on giving.

Curious about what's behind this, I've made a concerted effort to engage with some of the people who have been in touch with me. After all, it's good to be curious about things.

First and foremost, there's a very palpable distrust of big tech firms, or at least some aspects of big tech. Over the past few months, I've come across several individuals who expressed great distrust of Facebook, yet at the same time seemed perfectly comfortable piling personal information into random quizzes in exchange for a horoscope or to find out "what kind of dog" they are.

The distrust is made all the more real by the fact that people feel there's no way to escape it. They need a smartphone or computer, and that means having to make space in their lives for big tech, no matter how uncomfortable that may be.

And this distrust is only going to increase thanks to recent events and the volatile rhetoric that will no doubt follow.


Back in June of last year, I criticized the way that Apple and Google rolled out the COVID-19 notification framework. While the companies did release a joint statement, it was vague and jargon-laden.

And the way the framework was presented to users in Android and iOS is confusing. The explanations are full of talk of "random IDs" and "Bluetooth" and "authorized apps," and unless the user has an authorized COVID-19 app installed, there's a whole bunch of greyed-out settings visible, all ready to be misunderstood, to cause confusion, and to be taken out of context.
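For context, the mechanism behind those settings is actually quite simple, even if the explanations aren't. Below is a rough Python sketch of the rotating "random IDs" idea. To be clear, this is a toy illustration of the general scheme, not Apple and Google's actual code; the published spec derives the identifiers using HKDF and AES-128, and I'm substituting HMAC-SHA256 here purely to keep the sketch self-contained.

    import os
    import hmac
    import hashlib

    # Toy illustration of an exposure-notification-style scheme, NOT the
    # real Apple/Google crypto. The shape is what matters: the phone keeps
    # one random key per day and derives a fresh, unlinkable "random ID"
    # from it every 10-15 minutes for broadcast over Bluetooth LE. Nothing
    # that identifies the user ever goes over the air.

    def daily_key() -> bytes:
        """A fresh random key, generated once a day and kept on the device."""
        return os.urandom(16)

    def rolling_id(key: bytes, interval: int) -> bytes:
        """Derive the short-lived ID for one broadcast interval.
        HMAC-SHA256 truncated to 16 bytes stands in here for the spec's
        HKDF/AES-128 derivation."""
        msg = b"EN-RPI" + interval.to_bytes(4, "little")
        return hmac.new(key, msg, hashlib.sha256).digest()[:16]

    key = daily_key()
    for interval in range(3):  # print the first few of a day's worth of IDs
        print(interval, rolling_id(key, interval).hex())

If a user later tests positive and consents, only the daily keys are published; everyone else's phone re-derives the IDs locally and checks them against the IDs it overheard. No location, and no identity, is ever exchanged, which is precisely the detail those greyed-out settings fail to communicate.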

And six months on, while some work has been done to simplify the mechanisms, not much has changed in terms of clarity for end users.

This has helped create a space where misinformation and disinformation can flourish. Invariably, when I press someone on why they believe there's something nefarious about the COVID-19 exposure notification system, I'm directed to visit a website, Facebook post, or (to a lesser extent) YouTube video to do some "research."

Note: To its credit, YouTube seems to be doing a better job of deleting misinformation of late.

It also becomes hard to unravel one concern/conspiracy from another. The COVID-19 exposure framework is now part of a bigger narrative in which the coronavirus is chemtrails transformed into viruses using 5G death beams to force us all to be injected with vaccines that contain trackers, so that certain billionaires can ignite the oxygen in the atmosphere for, well, some reason.

I'm not kidding.

There's also a lot of overt antisemitism.

It's difficult to imagine that people would believe this, but using outlandish claims to sort and filter people is not a new thing on the internet.

I've noticed that the people who get in touch seem to break down into four broad categories:

  • People who are genuinely concerned and worried that "something is going on"
  • Attention-seekers who believe they have the answer and want to feel special
  • People who appear to feel angry, sad, or aggrieved, and who are looking for an outlet for their feelings
  • People who are out to make money

This leads to a question that I've seen many ask: how come people believe this crazy stuff?

There's a fascinating paper by Microsoft researcher Cormac Herley, "Why Do Nigerian Scammers Say They Are From Nigeria?", that looks at why the scam emails make such over-the-top claims, even going so far as to say they come from Nigeria, all but flagging the possibility that this is the infamous Nigerian scam.

"Since gullibility is unobservable," writes Herley, "the best strategy is to get those who possess this quality to self-identify."

Herley goes on to describe how this is used as a filter.

"An email with tales of fabulous amounts of money and West African corruption will strike all but the most gullible as bizarre. It will be recognized and ignored by anyone who has been using the Internet long enough to have seen it several times. It will be figured out by anyone savvy enough to use a search engine and follow up on the auto-complete suggestions. It won't be pursued by anyone who consults sensible family or fiends[sic], or who reads any of the advice banks and money transfer agencies make available. Those who remain are the scammers' ideal targets. They represent a tiny subset of the overall population."

"Failure to repel all but a tiny fraction of non-viable users will make the scheme unprofitable."

The paper is a fascinating read, and it goes a long way toward explaining how disinformation spreads, and why it's so hard to convince believers that they're wrong.
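Herley's point is easy to see with some back-of-the-envelope numbers. Sending spam is close to free, but every reply has to be worked by a human over days of correspondence, so each false positive (a reply from someone who will never pay) costs the scammer real effort. The figures in the sketch below are entirely made up; it's the shape of the result that matters.

    # Hypothetical scam economics, with invented numbers.
    N = 1_000_000        # emails sent; sending is effectively free
    COST_PER_REPLY = 50  # effort (in $) to work one reply, viable or not
    PAYOUT = 2_000       # average take from a victim who eventually pays

    def profit(reply_rate: float, pay_rate: float) -> float:
        """Expected profit: payouts from viable repliers, minus the cost
        of working every reply."""
        replies = N * reply_rate
        victims = replies * pay_rate
        return victims * PAYOUT - replies * COST_PER_REPLY

    # A plausible-sounding pitch: lots of replies, almost none of them pay.
    print(profit(reply_rate=0.01, pay_rate=0.002))    # -460000.0, a loss

    # An outlandish pitch: far fewer replies, but most who reply will pay.
    print(profit(reply_rate=0.0005, pay_rate=0.5))    # 475000.0, a profit

The absurd story throws away 95% of the potential replies, and that's exactly the point: every false positive costs money, so the pitch is tuned for precision, not reach.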

While it would be impossible to design and build systems that leave no room for intentional or unintentional misinformation, I do believe that companies need to do a better job of taking misinformation and disinformation into account. That starts with clear messaging, and with making sure that users are given clear opt-outs.

I can understand how having a new setting pop up on hundreds of millions of iPhone and Android smartphones would cause some level of concern and confusion, and create a space for disinformation to grow. Putting aside the discussion of whether the tech companies were acting for the greater good, care needs to be taken not to create spaces for FUD (Fear, Uncertainty, and Doubt) to flourish.

Because it will be weaponized. 
