
What Does Freedom of Speech on the Internet Mean in a Mass-Shooter World?

It used to be that ‘free speech is for the speech you hate, not the speech you like.’ But once again, the internet has changed everything.

In the middle of the night on Sunday, August 4th, the notorious forum site 8chan went dark. 

Designed from inception as a safe haven for all forms of speech, no matter how hateful or disturbing, 8chan’s reputation and influence have grown rapidly in recent years. Often, it made headlines as a hub for mass killers — the El Paso shooter posted a final message on 8chan, as did the Christchurch shooter, as did the Poway synagogue shooter.

The internet security service provider Cloudflare, however, had seen enough from its client. “We reluctantly tolerate content that we find reprehensible, but we draw the line at platforms that have demonstrated they directly inspire tragic events and are lawless by design. 8chan has crossed that line,” the company stated in a release ahead of the shutdown.

And just like that, 8chan was left basically inoperable, stripped of the network protection that kept it online. Many cheered the takedown as a win in the fight against online hate speech and the brutal violence that trails it. Even the young founder of 8chan, Fredrick Brennan, spoke out against the platform he had created. “Shut the site down,” Brennan told the New York Times. “It’s not doing the world any good. It’s a complete negative to everybody except the users that are there. And you know what? It’s a negative to them, too. They just don’t realize it.”

Yet the shutdown has added the perfect fuel to a fiery debate about what freedom of speech means on the internet, especially within the political context of 2019. Critics of the status quo say that the spread of hate speech under the auspices of personal expression has weaponized forums and chat rooms, with the power of anonymity propelling radical, dangerous worldviews toward lonely young men and sociopathic messiahs-in-training.

The perception of overreaching censorship, meanwhile, has ignited pushback from those who believe a free and open internet is a largely unmoderated one. They fear the government will wade too far into the waters of censoring certain forms of speech; groups like the American Civil Liberties Union also see private companies like Twitter and Facebook as too big and ubiquitous to be silencing individuals for something as subjective as hate. “There are reasons why viewpoint neutrality is the hallmark of First Amendment jurisprudence. Censors can behave in unpredictable, arbitrary and capricious ways — and no one has a sufficient monopoly on truth to serve as philosopher king over speech and debate,” National Review writer David French argues in an op-ed dubbed “The Social Media Censorship Dumpster Fire,” which consolidates many of the arguments against further regulation.

These are swift and confusing waters to navigate. In a twist, President Donald Trump on Friday called for government intervention in “bias” against “conservative voices” on social media platforms, more or less shifting the national focus from the extremely online, white supremacist behavior of the El Paso shooter to the grievances of alt-righters hit with bans. Trump-friendly FCC chairman Ajit Pai has also beaten the drum for tech companies to remain neutral and stay out of moderating speech (ironic, considering his antagonism to, well, actual net neutrality). “The greatest threat to a free and open internet has been the unregulated Silicon Valley tech giants that do, in fact, today decide what you see and what you don’t,” Pai said in a Senate Commerce Committee hearing in June.

Amid this rhetorical mess, evidence is mounting that the rise of racist, sexist and otherwise hateful speech online is mirroring a substantial rise in hate groups and hate crimes across the U.S. 

An FBI analysis found hate crimes in the country rose 17 percent in 2017 over the previous year (a third straight year of increases). Nearly half of the crimes were motivated by race, half of those targeted black Americans and 11 percent were anti-Latino, the report noted. Notably, additional research from the Center for the Study of Hate and Extremism at California State University, San Bernardino, found that racist speech, including from U.S. pols like Trump, seemed to give a boost to those considering a violent, prejudiced act. “We see a correlation around the time of statements of political leaders and fluctuations in hate crimes,” Brian Levin, director of the center, told the Associated Press. “Could there be other intervening causes? Yes. But it’s certainly a significant correlation that can’t be ignored.”

So if mainstream comments reported by the likes of CNN and Fox News can have an influence on hate crimes, how do we consider the impact of 4chan, 8chan, Reddit and other communities where extreme ideologies bloom?

Annemarie Bridy, a law professor at the University of Idaho and an expert on internet rights, tried to unpack the legal and moral argument around this in her 2018 paper titled “Remediating Social Media: A Layer-Conscious Approach.” In it, she argues that the principles of free and democratized speech on the internet have radically warped what content is spread and how arguments unfold. “One of the more disturbing findings to come out of recent studies of social media use is that users find false and inflammatory content more engaging and shareable than true and uncontroversial content,” Bridy writes. “As social media platforms currently operate, they’re finely tuned to propagate and amplify extreme and outrageous speech.”

Trying to apply the principles of the First Amendment to social media platforms hasn’t had the intended positive impact, Bridy continues. Ideally, freedom of speech empowers everyone to speak and be heard, with observers of an argument encountering “competing rational arguments for and against controversial propositions.” But the modern internet has broken that balancing effect, with forces like bad-faith bots and content targeting encouraging users to get hooked on ideas that stoke their most toxic beliefs. “The [First Amendment] process of truth-finding through truth-testing bears little resemblance to the algorithmic sorting that creates winners and losers in social media’s attention sweepstakes,” Bridy writes. “This algorithmic personalization contributes to the filter bubble effect that social scientists have linked to increasing social polarization and identity politics.”

Like Bridy, Robert Hernandez has grown concerned over the last decade while studying how misinformation and vitriol influence online users. Hernandez, a professor of journalism at the University of Southern California who focuses on tech and innovation, sees a major challenge in how social media companies profit most when user engagement is high. “Facebook, Twitter and YouTube should have the responsibility of knowing what to take down and how to stop algorithms from perpetuating hate. But there’s a hesitation because of money,” Hernandez says. “Twitter knows how to purge trolls and bots. They’ve done it before. But they won’t go far, because their user base just gets slashed. Still, studies show that bots and manipulative tactics create extreme outrage on a place like Facebook, and that drives engagement.”

Americans can turn their eyes to Europe, where free-speech protections are weaker than in the U.S., as a test case for the future. Germany has enacted some of the fiercest laws on hate speech, requiring platforms to remove offensive material within 24 hours or face more than $50 million in fines. Spain has convicted people under an anti-terrorism law that was modified to include social media posts. The U.K. is debating a sort of “content czar” with the power to ban websites. More broadly, the EU is mulling a law to require rapid removal of material that promotes terrorism.

The results are, so far, a Rorschach test for online speech rights. The mandate to handle hate speech complaints has forced companies to respond to harassment and toxic material with newfound speed and confidence. But there are numerous cases in which people were kicked off platforms or even taken to court over satirical speech, including one man’s song lyrics about German Chancellor Angela Merkel and a 21-year-old Spanish woman’s online jokes about the historical assassination of a Spanish prime minister… in 1973. And it is concerning that some social media companies appear to be throwing up their hands and putting the onus on different governments to dictate different rules for their users. “The question of what speech should be acceptable and what is harmful needs to be defined by regulation, by thoughtful governments,” Facebook CEO Mark Zuckerberg said at a May summit while seated next to French President Emmanuel Macron.

This is an appropriately frightening thought for anyone who sees the internet as a resource that should be free of the whims of nation-states, and critics argue it’s a slippery slope toward censorship in the vein of what you see in a place like China. It’s one reason why Taylor Lorenz, a writer at The Atlantic who reports extensively on online culture, believes that regulation of hate speech needs to focus on private companies shutting down individual bad-faith voices. Too often, she says, the point is lost when companies ponder how to make a site-wide template or algorithm for moderating speech in a misguided attempt to appear “fair.” 

“What these companies need to do is actually enforce their existing policies and say, ‘Hey, this isn’t the community we want to facilitate,’” Lorenz tells me. “In the same way YouTube no longer wants ISIS actively using the platform, these companies can do the same with white supremacists. It’s not that hard to identify bad-faith actors who spread misinformation and hate. But there’s a lack of moral courage for these tech companies to do that.”

While there were reports that YouTube’s crackdown on ISIS and al-Qaeda content inadvertently censored journalists and researchers, it also effectively cleared out dangerous pro-terrorist material and voices. That example has led some experts to call for revoking the “safe harbor” laws in the U.S. that protect companies from being sued when speech on their platforms leads to actual harm.

The growing science around how hate speech affects our brains and actions suggests that the clock is ticking for direct action to cull toxic networks online. The ability for anyone to share and curate opinions was a critical concept in the development of social media. But now, even the guy who invented the retweet button regrets the tool’s effects on mob mentality and how it became a “force multiplier” for viral hate speech.

That man, veteran tech developer Chris Wetherell, told BuzzFeed News that the vicious GamerGate movement showed how rapidly false information could spread on Twitter, making targets of innocent people in the process. “It dawned on me that this was not some small subset of people acting aberrantly,” he said. “This might be how people behave. And that scared me to death.”

GamerGate is a good example of something Lorenz tells me: that online hate speech isn’t merely an expression of liberty, but a weapon that ultimately aims to silence or intimidate certain groups, whether through online harassment or a shooting in a Walmart in the desert. Meanwhile, Hernandez says things will only get harder in the future, as the manipulation of images and sound (in things like deepfakes and sophisticated redubs) further erodes the integrity of information. “It’s critical that we begin training younger generations in media literacy so they can identify objective information and nuanced views within the content swamp out there,” he explains.

He also points out that the world kept turning in the aftermath of Twitter bans on toxic people like now-irrelevant alt-right darling Milo Yiannopoulos and red-faced Infowars boss Alex Jones. There is some evidence that shutting down users or websites like 8chan funnels people into the recesses of less mainstream platforms like Gab or ZeroNet, but Bridy and other experts note that reducing the overall number of people who interact with hateful rhetoric has a tangible benefit. “Neutrality on the internet has its place. That place is not social media platforms. Not now,” she concludes in her 2018 paper.

In a touch of irony, this is the exact opposite of French’s take in the National Review. “Let’s return to First Amendment principles online — and let the chips fall where they may,” he writes. 

Others, however, argue with ever-more convincing force that we’ve already seen how the chips fall, in places like Christchurch and El Paso, when things are left to chance.