I am old enough to remember when Twitter billed itself as “the free-speech wing of the free-speech party”. Heck, I can even remember John Perry Barlow’s hippie-libertarian “A Declaration of the Independence of Cyberspace” — a place “where anyone, anywhere, may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity”.
Well, “it’s the morning after the free-speech party, and the place is trashed”. Don’t take it from me. Those words come from twentysomething Adam, a content moderator in one of the “trust and safety teams” now employed by Facebook, Google and the other network platforms to detect and remove “hate speech”.
Last week, YouTube announced that it was “specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status”. It swiftly became clear what that meant in practice. On Wednesday, as The New York Times reported, “numerous far-right creators began complaining that their videos had been deleted or had been stripped of ads, presumably a result of the new policy”.
As if to test YouTube out, a Vox journalist named Carlos Maza demanded that it ban Steven Crowder, the host of a raucous political show, on the grounds that Crowder had repeatedly made homophobic jokes about him. “I’ve been called an anchor baby, a lispy queer, a Mexican, etc,” Maza wrote on Twitter. At first, YouTube resisted, arguing that Crowder hadn’t violated the company’s terms of service, but soon — partly under pressure from other employees of Google, its owner — it folded, announcing that it had “suspended this channel’s monetisation . . . because a pattern of egregious actions has harmed the broader community”.
I knew nothing of either Maza or Crowder until last week. The point of this column is not to defend the latter or the obnoxious “Socialism is for fags” T-shirt he sometimes wears on his show. The point I want to make is the more general one that free speech on the internet is in free fall.
Crowder has company. The co-founder of the English Defence League, Tommy Robinson, also had his YouTube presence restricted this year. Last month, Facebook banned not only the conspiracy theorist Alex Jones but also the alt-right provocateur Milo Yiannopoulos, the white supremacist Paul Nehlen, the African-American Muslim zealot Louis Farrakhan and the nationalist activist Laura Loomer.
And those are just the better-known names. In a recent report, Facebook boasted that the proportion of hate speech it found “proactively” — before users reported it — had risen to 65% in the first quarter of 2019.
Having previously confined themselves to removing paedophile and terrorist content, the big tech companies are now openly engaged in political censorship. Google admits as much: an internal presentation last March was actually entitled “The Good Censor”. What this means in practice is that tens of thousands of content moderators such as young Adam are deciding what you can and cannot see online.
Here’s another of them, talking to Silicon Valley lawyer Alex Feerst: “I was like, ‘I can just block this entire domain, and they won’t be able to serve ads on it?’ And the answer was, ‘Yes.’ I was like, ‘But . . . I’m in my mid-twenties.’ ” In Nineteen Eighty-Four, George Orwell’s vision of the future was “a boot stamping on a human face — for ever”. In 2019, it turns out to be a geek hitting “delete” on a keyboard for ever.
You may not care for any of the people I have mentioned thus far. You might still not care if I tell you that interviews my wife and I have done (for, respectively, the US online broadcasters Dennis Prager and Dave Rubin) have been “demonetised” by YouTube, meaning that advertisements were not associated with them and therefore Prager and Rubin could not earn money from them. The point is not who gets censored or demonetised. The point is that companies as big and ubiquitous as Google and Facebook should not have this kind of power. Even Mark Zuckerberg agrees that “we have too much power over speech”.
When so many people now read an article such as this after being directed to it by one or other of the tech platforms, it is fair to say that the platforms are, in the words of recently retired Supreme Court justice Anthony Kennedy, “the modern public square”. Yet they are emphatically not acting in that spirit — unless it was Tiananmen Square Kennedy had in mind.
Remember, the First Amendment to the US constitution bars Congress from “abridging the freedom of speech, or of the press”, and the Supreme Court has allowed few exceptions. Much more than in Europe, US courts are reluctant to penalise speech, even when plaintiffs allege defamation, invasion of privacy or emotional distress.
But none of this applies online, where (in the words of two legal scholars) the big tech companies can “act as legislature, executive, judiciary and press”. For they are doubly protected. First, the First Amendment is generally held not to apply to private companies. Second, section 230 of the 1996 Communications Decency Act explicitly states that “interactive computer services” are not publishers (so, unlike newspapers, they can’t be held responsible for bad stuff that appears on their platforms), but they also cannot be “held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that [they] consider to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” (so they can’t be accused of restricting free speech when they delete bad stuff).
To call section 230 — which was enacted when the internet was in its infancy — an anachronism would be an understatement. It would be more accurate to say that it is the Catch-22 of our time, in that the big tech firms are not publishers when harm arises from the content on their platforms, but are publishers when they engage in censorship. Either way, they have minimal legal liability.
Yet section 230(a)(3) explicitly assumed online platforms would “offer a forum for a true diversity of political discourse”. And the phrase “or otherwise objectionable” was never intended to cover political positions.
These days, in Washington, there is a great deal of discussion of “breaking up big tech” by resuscitating or reforming competition law. Other voices (including, suspiciously, the big tech companies) clamour for more regulation. But the free-speech crisis can and should be simply addressed. The network platforms handle far too much content to be effective publishers. They are entitled to section 230’s protection — but only if they uphold the diversity of discourse envisaged by Congress.
The alternative is to repeal section 230 and impose on big tech something like a First Amendment obligation not to limit free speech. Speaking as one of the last surviving members of the free-speech party, I’d prefer that second option. But either would be an improvement on that geek hitting “delete” on a keyboard for ever.
Niall Ferguson is the Milbank Family senior fellow at the Hoover Institution, Stanford