Facebook Sovereignty

Francis Fukuyama

Neal Stephenson’s classic 1992 cyberpunk novel Snow Crash depicts a future United States in which all forms of hierarchical authority have collapsed and each suburban subdivision has become its own burbclave requiring visas and passports to enter. There is a character named Raven who is untouchable for crimes he commits as he rides around on his motorcycle, because he is a sovereign. He’s a sovereign because he has a nuclear weapon strapped to his back that he will detonate if anyone comes to arrest him.

It’s struck me in recent years that Facebook was on its way to becoming a sovereign. Sovereignty means that there’s no one with higher authority to control you, an attribute that is usually reserved for states. Facebook is now so large and powerful that it is hard to say that the government of the United States can control it. Like a state, it has started to accumulate the attributes of sovereignty: it has aspirations to create its own currency, Libra, and it created what it called its Supreme Court, the Facebook Oversight Board. That board, filled with eminent persons drawn from academia, law, and international politics, announced its decision on Facebook’s de-platforming of Donald Trump last Tuesday.
The judgment was a mixed one; while it did not recommend reinstating the former President, it threw the matter back into Facebook’s lap, saying that the company needed to come up with clearer rules for what kinds of political statements and actions were acceptable on the platform. To the extent that the Oversight Board issued a judgment on the merits of what should and should not be allowed, it suggested that international human rights law protecting freedom of expression should be the standard.

A lot of the informed (as opposed to simply partisan) commentary on the decision revolved around the meta-question of what Facebook thought it was doing in establishing a “Supreme Court” in the first place. Critics pointed out that neither Facebook nor its Oversight Board had any real democratic legitimacy. Facebook, as a private company, would not be bound by the Board’s decisions, and taking the Board seriously as a legal entity bought into a false narrative that lent legitimacy to what was in the end an elaborate corporate PR scheme. Facebook had already questioned some of the Board’s decisions and refused to answer some of the questions that the Board posed to it.

In the discussion hosted by the Stanford Cyber Policy Center on Thursday, Alex Stamos, Facebook’s former chief security officer and now head of the Stanford Internet Observatory, argued that these criticisms missed the point. Facebook should not be compared to a state but to other companies. The big problem with the internet platforms was their lack of transparency in the way they came to content curation decisions; Facebook at least was opening up that process for public scrutiny and discussion. YouTube, owned by Google/Alphabet, decided not to de-platform Trump and was flying under the radar because it was less open about its own decision-making.

Stamos is correct that by the standard of corporate transparency, Facebook should be applauded. But there is a deeper point at work here that needs to be emphasized. Facebook is not a state and never will be; pretending that it has state-like institutions such as a Supreme Court is highly misleading and distracts people from the really difficult problem that needs to be solved.

As Nate Persily has argued, it is inappropriate to try to apply international human rights law to Facebook’s content moderation decisions. Human rights law applies to governments, and its free-speech principles would make many existing content takedown decisions impossible. Yet content moderation is a function that Facebook has to perform. For many years it has regularly banned from its platform sexual content, cyberbullying, terrorist incitement, and content that is violent or degrading. Most of this content would be protected under the U.S. First Amendment. A large platform left totally unmoderated, permitting as much freedom of speech as the First Amendment allows, would become a sewer of bad material, and the vast majority of people should be glad that such material is being kept off. (This is also the problem with the Republicans’ drive to revoke Section 230 of the Communications Decency Act, which immunizes the platforms from private liability.) Here the more open forum of the Oversight Board is actually useful in setting boundaries around the company’s content curation policies.

On the other hand, there is a big problem with regard to overtly political speech. Facebook, Alphabet, and Twitter between them have as much influence over what people see and hear about politics as the three broadcast networks did back in the 1960s. In this realm, there is no political consensus as to what constitutes unacceptable limitations on speech, as exemplified by Republican complaints about the de-platforming of Donald Trump. It would seem, however, that something much closer to First Amendment principles ought to apply here: it is not up to these private platforms to stamp out conspiracy theories or bad information. What should not be happening is the deliberate amplification or silencing of certain voices by the platforms on the basis of what maximizes the attention of their users and therefore their advertising revenues.

The problem is a very acute one for American democracy right now. According to recent poll data, some 70 percent of Republicans believe Trump’s false narrative about how the election was stolen from him. American politics will revolve around this issue in the coming years, as it already has with Republican efforts to restrict ballot access in many states. It is hard to think of a time when our system has had to deal with so malicious and irresponsible a political leader as Trump, but there he is, continuing to spout the stolen election line from his perch at Mar-a-Lago.

The policy question is: should it be up to the platforms to deliberately suppress his stolen election narrative, in the way the broadcast networks might have done fifty years ago? Here I think the answer is no: the problem has its origins in American society itself; however dangerous the election fraud narrative is, our basic principles of free speech do not permit anyone, either the government or a powerful private platform, to try to prevent its expression. If Donald Trump were to run for President again in 2024, it is hard to imagine that the platforms could refuse him access while granting it to other candidates. Even if the platforms moved in this direction, it is not clear that they would do anything other than contribute to the conspiracy theorizing that is already out there. What we should focus on instead is preventing the amplification of this narrative beyond the 30 percent or so of Americans who already believe it.

(The Stanford Working Group on Platform Scale has proposed a solution to the latter problem that would take away the political content moderation function from the large platforms and hand it over to a competitive layer of middleware providers, which would allow users to make their own decisions as to the type of content they saw. This would not solve our false narrative problem, but it might reduce the degree to which the platforms were contributing to it.)
