Can the internet shut off the valve that fuels mass shootings?

Opinion: Perhaps the web, or certain elements of it, is not to blame for inciting the hatred that fuels acts of domestic terrorism. But it does seem to play some sort of accessory role, and you’d think that could be stopped.

We have reached the point in our history as a society where every week, and now nearly every day, something launches a new cycle of grief. We witness the damage that domestic terrorists inflict on human beings, and we struggle to ascertain what we can do as human beings to stop them. Some of us pray. Others would rather prayer be replaced with action. Still others are stunned by the magnitude of our collective incompetence to take action. And in coping with that bewilderment, some of us pray.

We wonder (in my case, aloud) what role the internet, or more specifically, the web, plays in catalyzing and sustaining this vicious cycle. And if it is indeed technology that fuels this continuing bonfire, is there some knob we can twist or switch we can throw or line we can cut to put an end to it, or at least to bring the fire under containment?

Pay no attention

To blame the media has become such a common reaction to tragedy that this nation's most watched cable news channel has built its business model around blaming the media. In a statement issued earlier this week, the President blamed the media, along with the internet, for stoking the fire that led to the mass shootings in El Paso, Texas, and Dayton, Ohio.

The immediate reaction at large, as usual, has been to deny that any single societal component is at fault: not the media, certainly, but also not violent video games or the general state of mental health. It's easy to uphold the virtues of a free press when it comes under attack for its overriding principles rather than its underlying mechanics.


In 2012, a group of prominent websites staged a mass protest against an anti-piracy bill being considered in Congress, diverting traffic from their own sites to the host of an anti-censorship petition. The Stop Online Piracy Act (SOPA) would have required DNS servers to deny users access to blacklisted websites where pirated content was found to be distributed. SOPA was eventually defeated, after the case was made that any mechanical effort to thwart users' free access to a website was contrary to the rights of humankind.

The point was made: Any system that could divert traffic away from a website accused of piracy could be co-opted to divert traffic for any other reason. It would have been a censorship switch, and SOPA opponents rejected the idea wholesale. There is an inherent danger in granting anyone in a position of authority access to a switch that determines what information you can or cannot know.
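To see why opponents treated DNS blocking as a general-purpose censorship switch, a minimal sketch helps. This is a toy resolver with hypothetical domains, not any real SOPA mechanism; the point is that the blocking path carries no notion of why a domain is on the list.

```python
# Toy DNS-style resolver illustrating the "censorship switch" objection:
# nothing in the blocking logic knows or cares WHY a domain was listed.

BLOCKLIST = {"piracy-example.com"}  # hypothetical blacklisted domain


def resolve(domain, records):
    """Return the IP address for domain, or None (NXDOMAIN) if blocked."""
    if domain in BLOCKLIST:
        return None  # the switch: resolution simply fails
    return records.get(domain)


records = {
    "piracy-example.com": "203.0.113.5",
    "news-example.org": "198.51.100.7",
}

assert resolve("news-example.org", records) == "198.51.100.7"
assert resolve("piracy-example.com", records) is None
# Adding any other name to BLOCKLIST blocks it just as easily, for any reason.
```

The same few lines that block a piracy site would block a newspaper; the mechanism itself is indifferent to justification, which was precisely the protesters' objection.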

Yet via the high-pressure pipeline of social media, every day we see a level of vitriol so potent that many of us beg for someone to shut it off.

Over at our sister site CNET, Ian Sherr opened a one-sided dialogue with Twitter CEO Jack Dorsey. Taking a side bolstered by a wellspring of public support, Sherr took Dorsey to task for not effectively policing the harassment and bullying content on his network, in accordance with its own hateful conduct policy. "It's the hate campaigns, the racism, the intimidation, the deadly assault and the Russian interference in the US election. All of it," wrote Sherr. "Reality is coming down hard on social networking, and no one seems more publicly oblivious than you."

Last year, Facebook began an effort to more actively monitor, moderate, and police its content, using tools that include AI-based "computer vision" functions, presumably built on neural networks. Such tools can, according to Facebook, effectively identify nudity in photographs and hateful speech in text. Blogging site Tumblr began a similar effort last December, with instantly hilarious results: Its algorithms began flagging the interiors of caves, archaeological digs, and cartoons containing heart shapes as pornographic.
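Why would an algorithm flag a cave as pornography? A toy heuristic (my own illustration, not Tumblr's or Facebook's actual model) shows how easily a crude "skin tone" rule misfires on sandstone:

```python
# Toy moderation heuristic: flag an image as explicit when too many pixels
# fall in a warm "skin tone" color range. A sandstone cave interior is
# painted in exactly those colors, so it trips the filter.

def looks_skin_toned(rgb):
    """Crude skin-color test on an (r, g, b) pixel."""
    r, g, b = rgb
    return r > 95 and g > 40 and b > 20 and r > g > b


def flag_explicit(pixels, threshold=0.5):
    """Flag when more than `threshold` of pixels look skin-toned."""
    warm = sum(looks_skin_toned(p) for p in pixels)
    return warm / len(pixels) > threshold


cave_interior = [(180, 120, 70)] * 80 + [(60, 60, 60)] * 20  # sandstone + shadow
blue_sky = [(90, 140, 220)] * 100

assert flag_explicit(cave_interior) is True   # false positive: it's a cave
assert flag_explicit(blue_sky) is False
```

Real classifiers are vastly more sophisticated than this sketch, but the failure mode is the same in kind: the model learns a statistical proxy for the forbidden content, and everything that shares the proxy gets swept up with it.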


But the manifestos posted by suspects in these latest mass shootings appeared not on any of these more generally proliferated services, but on 8chan, an anonymous chat forum that eschews any policing or self-censorship. The site's very name is a monument to the foolishness of artificial internet traffic diversion. It's the spiritual successor to 4chan, to which ISPs including AT&T blocked access a decade ago, concerned about its links to both hate speech and pirated content. At that time, my long-time friend and colleague Tristan Louis wrote for his personal blog, "If an external party can control when or how you can use a device or decide on what you can or cannot see, or select what programs you can install on it, are you still owning it?"

Cloudflare, one of the world's major content delivery networks, terminated 8chan as a customer last week. On his company's blog, CEO Matthew Prince explained, "We reluctantly tolerate content that we find reprehensible, but we draw the line at platforms that have demonstrated they directly inspire tragic events and are lawless by design. 8chan has crossed that line."

The key phrase in that statement is "we draw the line." History will mark that someone stepped forward to draw a line.

On Twitter, there had been a small uprising urging Cloudflare and other service providers to discontinue service to 8chan. After the company's move, there was a small celebration, along with notes of hope and optimism that 8chan would soon become invisible to the world.

Release valve

A decade ago, the idea that someone could flip a switch and render the speech (or whatever you wish to call it) of thousands of unknown, pseudonymous people inaccessible was a catalyst for the net neutrality movement. When the SOPA protest diverted traffic in what was described as the digital equivalent of a revolutionary march, the Electronic Frontier Foundation praised 4chan, among others, for its participation. Yet in the wake of the mass shooting last March in Christchurch, New Zealand, whose perpetrator had left a trail on 8chan, the EFF turned its objections to the idea of ISP blockage down to about a 2 on the dial, relegating them to the bottom of a buried, bulleted list as "clearly a relevant concern."

And now, after El Paso, 8chan's own founder, in an interview with The New York Times, remarked, "Shut the site down. It's not doing the world any good."

Somewhere between 2009 and 2019, a line was crossed. One wonders, on the human toll odometer of mass shootings, exactly what the digits read at that moment.

We vehemently object to the idea that anyone could flip a switch and make our ramblings, our cat videos, and our "Friends" reruns go away. We click a button on a petition and declare ourselves participants in a virtual march for freedom -- the digital successors to Rosa Parks and John Lewis. Yet we demand that the Jack Dorseys, Mark Zuckerbergs, and other social media CEOs throw that very same switch -- just not in our name. We are asking human beings to lead a police action against a tidal wave of content that we are afraid to let technology control for us.

Technology has always been about automation: taking a process with a set number of steps that can be standardized, and enabling a processor to do it on people's behalf. As awkward as our first attempts may be, we know it is probably feasible for an algorithm to effectively and efficiently spot bullying, harassment, shaming, and threats against individuals or peoples. We're just scared to throw that switch, because we know that in the process we're giving up some part -- perhaps a large part -- of our great dream of the web as the digital mesh that bonds a society together. And we're afraid of history perceiving us as having been on the wrong side of an argument.

Yet every day, before we board a commercial aircraft, we take off our shoes and let an X-ray machine scan our bodily cracks and crevices, knowing that in the process countless lives may already have been saved. Benjamin Franklin, for all his homespun wisdom about liberty and security, never lived one minute in the 21st century.

If what Cloudflare did to 8chan was courageous, how was what AT&T did to 4chan cowardly?

We cannot blame the internet for the motivations, intentions, or actions of mass shooters. But we cannot deny that gathering them together into a plurality has given them a potency they would not have had otherwise. And here we are, with the power to reduce this potency with the turn of an automatic lever within our reach, debating whether the greater evil lies with their actions or with ours. Some of us pray.

[Scott M. Fulton, III is the author of this document and is solely responsible for his content.]