It is supposed to be the fabric of the ongoing global conversation, the medium of exchange for information, commerce, and ideas. Facebook has been called a "cesspool" of banter, first by human beings, and later by less verifiable entities leveraging Facebook itself for visibility and agitation. And there appears to be evidence that Twitter's selection algorithms for importance and user interest selectively amplify extremist points of view, first from human beings, and later from accounts with more digits in their handles than a UPS tracking number.
Social media has become the stock exchange for public opinion. Investors seeking to capitalize on their association with the most active issues buy low and sell high. When one of the exchanges makes an effort, however paltry, to moderate trading, as Twitter did last October, it's taken to task for tampering with free enterprise, and with democracy itself. When an exchange vows to stay out of traders' way, as Facebook did last year, it's taken to task for amplifying the most volatile issues and tampering with democracy itself.
That both sides should have equal volume in the matter would appear to be by design. If there's anything social media algorithms have succeeded in accomplishing in the last decade, it's making nearly every discussion of importance in the global sphere a zero-sum game.
"Paying to increase the reach of political speech," wrote Twitter CEO Jack Dorsey last October, "has significant ramifications that today's democratic infrastructure may not be prepared to handle."
If we believe the studies being conducted in the wake of Britons' changing political attitudes towards the European Union, Americans' sudden willingness to embrace and validate extreme political views, and psychiatric inquiries into the rise in suicides among young people [PDF], the "democratic infrastructure" to which Dorsey referred must be made of a soft, malleable plastic. Other nations, or even random assemblies of enthusiasts in their dorm rooms and basements, can shape this indiscernible paste into whatever kind of society best suits their interests. It would be the stuff of science fiction, except almost all of it now looks somewhat dated.
There will probably be no permanent record, in the distant future long after the forces of nature have reclaimed this planet from humanity, of the span of time when human beings sparked the long-trumpeted "global conversation," then spent most of that time arguing over whose terminology to use. All the electrons will have scattered, leaving any archaeologists pondering the residue of our society to wonder what exactly happened. Evidently, there was a catalyst, either of something brilliant or something catastrophic, though it will have left no mark.
If the striving for social status is to be used as a major explanatory variable for social behavior then it becomes particularly important to reach clearer understanding of the nature of social status itself. We need a theory of social status explaining how the various members' relative rank and status is determined in social groups: A theory explaining what makes social groups grant higher social status to some of their members, and what makes these members themselves seek higher social status within the group.
-John C. Harsanyi, University of California at Berkeley
A Bargaining Model for Social Status in Informal Groups and Formal Organizations, 1966
"We don't have the strongest reputation on privacy right now," admitted Facebook CEO Mark Zuckerberg at his company's conference last April, as he was rolling out yet another new vision for Facebook's infrastructure.
Typically, social media has been the world's most effective system for disseminating hyperbole, so Zuckerberg's understatement on stage that day was extraordinary by comparison. Facebook's privacy incidents over the years have been so numerous and so severe that ZDNet has already established a kind of hall of fame for them. At the summit of this list is an incident that may literally have changed the world, covered in depth by ZDNet's Charlie Osborne: The policy-breaking sharing of at least 87 million personal records of Facebook users worldwide with the political operations group Cambridge Analytica. As ZDNet's Zack Whittaker reported, that firm has been linked with disinformation campaigns responsible for stirring political activism favoring the United Kingdom's exit from the European Union, and favoring the Republican candidate in the 2016 presidential campaign.
In February 2019, a UK Parliament select committee laid much of the blame for this illicit connection, and the extraordinary aftermath that has followed, at the feet of Mark Zuckerberg. As the committee's report found [PDF]:
The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.
Facebook, according to recent statistics compiled by Web analytics firm StatCounter, remains responsible for generating about two-thirds of the world's social media-related Web activity. Its millions of users (many of them real) find themselves in a situation that would have astounded Ray Bradbury. The powers that would resuscitate the Soviet Empire were found by intelligence services to have conducted a disinformation scheme targeting the Western world, especially the UK and the US. The scheme's primary tools were Facebook, Instagram (part of Facebook), Twitter, Google Plus (now defunct), and YouTube (part of Google).
The apparent aims of these powers, directed by Russia, did indeed come to fruition, and spectacularly so. Citizens of the UK voted in favor of their country exiting the European Union. A reality show host with no public service experience was elected President of the United States. And a comedian who had the good fortune of portraying on television the President of Ukraine -- an adjacent country whose territory these powers would like to see annexed -- was elected President of Ukraine.
In March 2019, without mounting much of a defense, a man taken into custody by Ukraine's intelligence service confessed, according to The New York Times, to purchasing Facebook accounts from the people who originally launched them. He aimed to use these accounts on behalf of Russian intelligence to spread disinformation about Volodymyr Zelensky, then the front-runner. Zelensky's campaign reportedly tried to report these apparent violations of Facebook's campaign advertising policy to Facebook's much-publicized "war room," which the company established in October 2018, only to discover it had been dismantled in November 2018.
So the Zelensky group built an automatic system -- a bot that would detect apparent disinformation about their candidate, and respond to it in milliseconds. Some credit that bot with Zelensky's victory.
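The Times account gives no technical details of how the Zelensky team's bot worked. In the simplest form of the pattern described, a bot matches incoming posts against known disinformation phrases and replies with a prepared rebuttal. The sketch below illustrates that general idea only; every pattern, reply, and function name here is hypothetical, not drawn from the actual system.

```python
from typing import Optional

# Hypothetical sketch only: the article describes a bot that detects
# apparent disinformation and responds within milliseconds, but gives
# no implementation details. All phrases and rebuttals are invented.
DISINFO_PATTERNS = {
    "candidate is a foreign agent": "Fact check: no evidence supports this claim.",
    "candidate plans to dissolve parliament": "This claim has been officially denied.",
}

def detect_and_respond(post_text: str) -> Optional[str]:
    """Return a canned rebuttal if the post matches a known
    disinformation pattern; otherwise return None."""
    lowered = post_text.lower()
    for pattern, rebuttal in DISINFO_PATTERNS.items():
        if pattern in lowered:
            return rebuttal
    return None
```

A real system would presumably pull posts from a platform's streaming API and use fuzzier matching than exact substrings, but the detect-then-reply loop is the essence of what the report describes.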
There remains no incontrovertible evidence that disinformation campaigns were directly responsible for any of these world-changing events -- that agitation by social media changed people's votes, or that these trends are directly correlated. Yet that has always been the case with disinformation: It masks its own evidence trail.
All this may explain the serenity and self-confidence, if indeed his emotions can be read so clearly, with which Mark Zuckerberg wrote off this incident, along with all the others in the growing mountain, as a glitch.
"As our world has expanded," Zuckerberg told his conference audience at one point, "we face a new challenge: We all need to find our place in this much bigger world, and it can be hard to find and feel like you have a unique sense of purpose when you're connected to billions of people at the same time. Privacy gives us the freedom to be ourselves. It's easier to feel like you belong when you're part of smaller communities, and amongst your closest friends.
"So it's no surprise," the CEO continued, "that the fastest ways that we're all communicating online are private messaging, in small groups, and in stories. As the world gets bigger and more connected, we need that sense of intimacy more than ever."
Anyone who's written or outlined public speeches for a living can detect when a paragraph or a passage has been specially constructed to move the listener's attention past a glaring mistake. Zuckerberg's is the kind of language defense attorneys use to plead with juries. It wraps us in the warm blanket of an altruistic society and casts his platform as the source of that warmth. It invites us to make judgment calls that relocate the focus of our deliberations to how we feel, and whether we're happy.
Happiness is a chemical reaction. The neurochemical considered primarily responsible for the feeling of pleasure is dopamine. Human beings often choose behavioral patterns that streamline, or even hard-wire, the reception of dopamine. Addiction is often the result of hard-wiring the brain's dopamine-consuming reward centers in such a way that the behaviors triggering these rewards seem unavoidable.
Dopamine enters our story of the decline of social media at this point for a non-coincidental reason: Facebook's entire premise, some researchers and psychologists believe, is as a system geared to trigger the reward centers of its users' brains, as a means of continuing and ensuring their usage patterns.
The notion that social media can become the object of addiction is by no means new, especially to this publication. Almost nine years ago, ZDNet contributor Zack Whittaker shared his personal story of the similarities he noticed in his life, between when he gave up Facebook for a week and when he gave up smoking. Later, Zack shared his observations about the conditions one should recognize, both in one's life and mindset, that should serve as signals of being addicted to using social media platforms.
As one of history's greatest psychiatrists -- Lucy from Peanuts -- so poignantly explained, the first step in coming to terms with one's problem is to give it a name. Making something a "-phobia" or an "-ism" or an acronym, effectively de-humanizes it, recasting it as a foreign organism or a syndrome. My father, who spent the latter half of his career as a substance abuse counselor, noted that people seeking freedom from addiction tended to visualize their problems as demons attacking them from the outside, rather than manifestations from within.
As ZDNet reported, researchers developed a quantitative means of rating one's interactivity levels with Facebook, to determine whether one had officially crossed the threshold into obsession. Quickly, someone coined an appropriate acronym for the catalyst of this obsession: FOMO (Fear of Missing Out). In 2013, the Oxford Dictionary officially adopted the term as a legitimate word. ZDNet's Eileen Brown reported on a survey that claimed to present evidence that everyone suffers from FOMO to one degree or another. Which turned out to be a good thing, as marketers discovered clients could leverage FOMO to their advantage in attracting and retaining customers.
As with any substance that loses its rewarding effects over time, marketers should be warned that leveraging FOMO as a customer attraction tool has its limits. As Harvard Medical School summarizes it, "Addiction exerts a long and powerful influence on the brain that manifests in three distinct ways: Craving for the object of addiction, loss of control over its use, and continuing involvement with it despite adverse consequences."
According to the Rand Corporation, results of a 2018 survey published by the UK's Royal Society for Public Health found a correlation among people aged 14 to 24 between use of Instagram, Snapchat, and Facebook and negative mental health symptoms. This correlation, argue the authors of King University's report "The Psychology of Social Media" is a physical one that, with the right equipment, may be visually observed: "The ventral tegmental area (VTA) is one of the primary parts responsible for determining the rewards system in people's bodies," King U. explains. "When social media users receive positive feedback (likes), their brains fire off dopamine receptors, which is facilitated in part by the VTA."
Around the same time, a University of Pittsburgh study published in the American Journal of Health Behavior examined the social media behavior patterns of 1,730 individuals ranging in age from 19 to 32. Participants in this study used social media to varying degrees and were clustered into categories based on their usage patterns. What concerned these researchers most was an apparent correlation between the high-usage "Wired" group's online behavior, and their other personal traits which are already associated with anxiety and depression:
It may be that this particular pattern of SMU [social media use] is indicative of a preoccupation with, and hyper-vigilant surveillance of, one's social media. For example, Wired individuals may routinely engage in attention seeking behaviors, reflected in high volume SMU, such as frequent status updates and subsequent checking for "likes." This preoccupation may lead to depression if the individual does not receive the desired feedback from his or her social media audience. Similarly, "fear of missing out" (FOMO), characterized by the desire to stay continually connected, and "Snapstreaks," metrics of consecutive daily "Snaps" between friends on Snapchat, may contribute to a hyper-vigilant social media surveillance. These social media-derived behaviors may mimic and contribute to symptoms of anxiety.
A subsequent project, this time not by psychiatrists but by the business school of South Korea's Sangji University, attempted to draw relationships between these and other, similar studies being conducted at the time. This project used statistical analysis to correlate what psychiatrists were seeing with the phenomena that social media users themselves were reporting in their survey results.
In fairness, the Sangji report ostensibly casts its own conclusions as "hypotheses." Yet it makes some bold, stark propositions: High social media use, particularly with Facebook, may initially be driven by people's need to belong. This need is especially intense among people prone to narcissistic personality disorder, whose dopamine centers are triggered by the need for self-presentation and admiration. Once the cycle of social media use begins, users who exhibit the traits of addicts also display traits of voyeurism, observing other people's lives as a substitute for the human relationships they have been unable to build for themselves.
That feeling of belonging and intimacy to which Zuckerberg referred -- of not missing out, of not just being part of something but being admired for it -- can be substantiated artificially in two ways. One is through the systematic, measured dispensation of status. This is the variable that was the focus of Nobel laureate economist John C. Harsanyi's career. In Harsanyi's time, there was no means -- scientific or automatic -- to quantify one's social status among a group, or in society at large. But this is exactly the currency of Internet-based social networking; the quantification of social status is precisely why Facebook was created. Social networks use status as a reward for participation; what's more, as the Sangji University study stated, they punish lack of participation with the removal of status, which triggers feelings of exclusion and anxiety.
The second method of artificial substantiation comes by means of the ad hoc generation of common belief systems -- worlds of arbitrarily assembled information that can masquerade as fact. In the absence of true interpersonal exchange, a social media environment can use belief in a common conceit as a placebo. The feeling -- perhaps the illusion -- of these items' factuality and relevance may be compounded by the feeling users have of what Zuckerberg called "intimacy," that despite their wide propagation, these items were tailored or filtered or expressly delivered just for them. These phenomena were validated not just by the psychiatric and business school studies, but by the Senate Select Committee on Intelligence, in its report on Russian interference in the 2016 elections [PDF].
Among the research the Committee entered into evidence was an MIT investigation into Twitter's analytics [PDF], which concluded that a false news story, or a false item masquerading as a news story, was 70 percent more likely to be retweeted than a true one, especially in circumstances where the story had a political context. It also cited a report by the National Bureau of Economic Research in Cambridge, Massachusetts [PDF], which tracked a sea of political tweets about the 2016 elections, produced by both humans and automated response systems ("bots"). In horrific graphic detail, the NBER report revealed how intensely human Twitter users responded with opposition or contrary positions (for example, argumentatively) to tweets from bots (blue lines, below) versus from other humans (grey lines).
Senators looking into the now overwhelming evidence of Russia's effort to disrupt US democracy saw these charts. This is the actual disruption: A scientific measurement of compulsive reactions to political agitation. The need to belong, these time-series charts indicate, can be satiated by artificial entities to which no one can belong.
If anyone believes that the neo-Soviet disruptors behind this scheme were oblivious to how they were leveraging an Internet communications system to mount an attack on the psychology of democracy itself, they need to run a Google search on Vladimir Putin.
Or perhaps that's the wrong tack because the relevance of whatever that search may reveal could be a random variable. The Web, of which social media is a large chunk, is capable of substantiating whatever information that we believe to be, or that we wish to be, facts. If a reputable publication presents a fact or makes an assertion, its reputation does not have to be called into question for that assertion to be disputed. Someone, or something, on social media can simply deny that such reputation exists. If the validity, or even the authenticity, of the denier is in doubt, then someone, or something, can corroborate that doubt. Then if the corroborator is a false entity, a "sock puppet," the entire argument becomes moot. Nothing is real, to borrow a Beatles phrase, and nothing to get hung about.
During Mark Zuckerberg's congressional testimony in April 2018, in the last few seconds of his allotted time, Sen. Ben Sasse (R – Neb.) tossed out what he might have supposed to be a throwaway question: "Do social media companies hire consulting firms to help them figure out how to get more dopamine feedback loops so that people don't want to leave the platform?"
"No, Senator. That's not how we talk about this," responded Zuckerberg, with a sentence begging to be parsed, "or how we set up our product teams. We want our products to be valuable to people. And if they're valuable, then people choose to use them."
It may literally be that simple to the man: Value, in his mind, is whatever makes one happy.
This feature was updated after its initial publication in December 2019 with new information.
Learn more -- From the CBS Interactive Network
- Facebook agrees to pay £500,000 fine over Cambridge Analytica by Danny Palmer
- Facebook asked by lawmakers to pause Libra cryptocurrency project by Asha Barbaschow
- Congress considers a national standard for data privacy by Stephanie Condon
- This is Your Brain on Instagram: Effects of Social Media on the Brain by Kelly McSweeney, Now (Northrop Grumman blog)
- The Dopamine Seeking-Reward Loop, or, "Why Can't I Stop Scrolling on My Newsfeed?" by behavioral scientist Dr. Susan Weinschenk, The Team W Blog
- What Is Social Media Addiction? from Addiction Center