Freedom of Speech

It is Twitter's platform scale, rather than its users' speech, that should worry proponents of democracy. Francis Fukuyama's latest.

Francis Fukuyama

Freedom of speech is one of the core freedoms of a liberal society. The ability to criticize powerful people and organizations serves as a check on their power, and the act of speech itself is an integral part of one’s personal autonomy.

Nonetheless, all liberal societies have placed limits on speech, limits that are inevitable given the dangerous potential of certain forms of speech. The rise of the internet has created a special problem for freedom of expression and has led to substantial confusion on the part of many people. An example of that is Elon Musk, who took over Twitter claiming that he was a “free speech absolutist.” That remark showed that he had not even begun to think through the complexities of managing speech in the internet/big platform era, and we are all paying the price for that now.

Speech has never been totally free in any modern liberal society. Many European liberal democracies have had restrictions on certain forms of political speech such as Holocaust denial, which long predated the rise of the internet. America’s First Amendment was always less restrictive, but even here there are limits. Contrary to popular belief, you can yell fire in a crowded theater, but the Supreme Court has ruled the open incitement of violence to be off-limits.

Moreover, the scale of a media channel makes a big difference in the legality of limits to speech. The Federal Communications Commission has long regulated what can be broadcast on over-the-air television, banning hardcore pornography, graphic violence, and other types of content. Indeed, it used to enforce something called the Fairness Doctrine, in which broadcasters were enjoined to promote “balanced” coverage of divergent political views. The consistency of these constraints with the First Amendment was upheld by the Supreme Court in its 1969 Red Lion Broadcasting v. FCC decision. (Conservatives never liked the Fairness Doctrine because they believed it was tilted against them, and the doctrine was rescinded by administrative decree in the 1980s.) Restrictions on broadcasters’ speech were allowed by the Court due to considerations of spectrum scarcity, the uniqueness of broadcasting, and the public interest.

We are in a comparable situation today with the rise of the internet and social media over the past decade. Spectrum is obviously no longer limited, but network economies have served to drive internet communications into a small number of very large platforms, of which there are three: Twitter, Meta (Facebook), and Google. In many ways these large platforms are comparable to the three broadcast TV channels of the 1960s, influencing not just political opinion but what citizens believe to be factual information. The internet enables the amplification of particular views and information on a scale and with a speed that is historically unprecedented, and also allows the large platforms to in effect block disfavored speech through takedowns.

In strict constitutional terms, the First Amendment applies only to government restrictions on speech. Platform decisions to promote or take down the material they carry are themselves protected under the First Amendment, so there is a narrow sense in which conservative charges that Twitter and Facebook are engaging in censorship are wrong. In addition, Section 230 of the 1996 Communications Decency Act has from the beginning protected the platforms from private litigation.

The big platforms regularly use their power to control the content they carry, much as the TV networks have done. They do not permit users to post graphic pornography or violence, promote terrorism, or incite violence (the grounds on which Donald Trump was barred from most platforms after January 6). With the exception of the latter, this kind of content moderation has largely been regarded as uncontroversial.

Nonetheless, modern democracy has a free speech problem. We need to be specific here: the problem is not that various forms of toxic content such as conspiracy theories, fake news, and hate speech are available over the internet. The First Amendment as it has been interpreted over the years protects these forms of speech. The problem is different and relates to platform scale: the platforms have an unprecedented power to either amplify certain messages or suppress disfavored speech. They have, moreover, a substantial ability to microtarget their audiences in ways that earlier advertisers couldn’t achieve. It is platform scale and power that should worry proponents of democracy and properly be the object of public policy, not the mere fact that the platforms are carrying politically toxic material. In the 2020 White Paper authored by the Stanford Working Group on Platform Scale (which I chaired), this power is likened to a loaded gun left on the table. We have to trust that the person sitting across from us won’t pick up the gun and shoot us with it, but there is no legal way of preventing that from happening.

Elon Musk’s claim to be a “free speech absolutist” constitutes an incoherent posture. Presumably this absolutism will not extend to permitting child pornography or snuff movies to be shared. How about tweets purporting to show that Covid vaccines are harmful, or doxing election officials in the midst of a contested election in a swing state? There is a fascinating case of yoga moms being drawn into QAnon because a prominent yoga guru became an adherent; the recommendation algorithm simply picked up this connection on the assumption that anyone interested in yoga was likely to be interested in QAnon as well. Should social platforms make an effort to stop making such suggestions?

Given the platforms’ power to amplify or suppress, free speech absolutism is simply not a tenable position. The internet has become such a sewer that content moderation is absolutely necessary. The question is, who has the legitimacy to make these complex decisions with regard to political speech? Most of us would agree that the government should not be in charge. There are some European countries like Germany that are trying to regulate the internet in this fashion, but we Americans would never reach consensus on what is acceptable political speech given our polarization.

In lieu of such consensus, many people have settled on pressuring the platforms to take on this responsibility. But a for-profit corporation has neither the capacity nor the legitimacy to make such intensely political decisions. Our Working Group on Platform Scale suggested the concept of middleware as a way out of the problem—outsourcing content moderation to a layer of competitive third parties that could tailor choices to the preferences of individual users. But there is no business model currently supporting this, and middleware would be fiercely resisted by the platforms themselves.

Over the past few weeks Elon Musk has revealed himself to be a MAGA-adjacent conservative who indulges in a lot of Trump-like trolling. For example, he decided to go after Alexander Vindman, the whistleblower who exposed Trump’s effort to block aid to Ukraine leading to his first impeachment.

Having promised to set up an independent body to set content moderation standards, he backed away from that and started making arbitrary decisions about whom to allow on the site. We will never know what other similar choices he has made, particularly with regard to takedowns, which are very hard to monitor. He has now saddled himself with the responsibility of personally making content moderation decisions, like his recent choice to ban Kanye West once more for anti-Semitism.

The one thing Musk has said that makes sense is his distinction between freedom of speech and freedom of reach.

This distinction correctly points to the central problem of the platforms: not so much their content per se, but their ability to amplify or silence. If he were serious about this, he would take away the ability to “like” and retweet harmful content, but not take it down. But this means that Twitter remains in the business of policing content, just using different tools and criteria. Implementing such a policy would still require quite a large bureaucratic apparatus, and I doubt that he is prepared to really go down this route.

Image: A microphone with brass knuckles. (Unsplash: Francesco Tommasini)

Culture · Democracy · Technology · Frankly Fukuyama