Why Middleware Is the Only Solution to Platform Power

In our polarized internet age, neither social media platforms nor the state can be counted on to regulate content. Middleware may be the best approach, argues Francis Fukuyama in his latest blog post.

Francis Fukuyama

Elon Musk’s assertion, at the beginning of his Twitter takeover, that he was a “free speech absolutist” was in itself sufficient to demonstrate that he hadn’t begun to think through the problem of online freedom of speech.

My Stanford colleague Daphne Keller explains why this is a nonsensical position. There is a huge category of speech that is, under the U.S. First Amendment, “lawful but awful.” Hardcore pornography, depictions of extreme violence, bullying, or rants encouraging hate and violence are all legally protected speech under well-established interpretations of the U.S. Constitution. The big existing internet platforms—Twitter, Google, Facebook, and now TikTok—all filter out much of this content because none of their users want to see their feeds filled with such material. As a result, Twitter’s new owner has retreated from “free speech absolutism” to acceptance of the need to take down content that is “wrong and bad.”

What Musk was really concerned about was not freedom of speech, but rather a perceived liberal bias in Twitter’s former content moderation. And indeed, he has shifted that bias in a more conservative direction over the past months, restoring the accounts of a number of conservative voices who had been deplatformed under the old regime, like Jordan Peterson, Ben Shapiro, and Donald Trump (who has yet to tweet since being unblocked).

There’s nothing wrong with this if you yourself are a conservative. But this shift does nothing to address the underlying problem, which is the power that three and a half large platforms have over political speech, not just in the United States, but around the world.


There are now four cases before the courts that seek to address the problem of the platforms’ power to shape speech. They move in opposite directions. Two of them, Gonzalez v. Google and Twitter v. Taamneh, were filed by families of victims of terrorist violence who contend that the platforms have been insufficiently vigilant in taking down potentially harmful material. The other two cases revolve around new laws in Texas and Florida that treat the platforms as “common carriers” that “must carry” various forms of political speech, with the clear intention of forcing them to carry conservative voices. All four of these cases are likely to come before the Supreme Court (the arguments in the Gonzalez case were heard by the Court this past week).

Up to now, it has been up to the platforms themselves to decide on content moderation rules. They were empowered to do this by Section 230 of the 1996 Communications Decency Act, which shielded them from liability for their content moderation decisions. In addition, they argue that as private companies, their content moderation decisions are protected by the First Amendment, as in the case of other media providers. Should the plaintiffs in Gonzalez and Taamneh prevail before the Court, the Section 230 shield would be weakened, possibly in ways that would induce the platforms to take down much broader swathes of legitimate speech for fear of liability.

The Texas and Florida laws take the opposite tack, using government power to force the platforms to carry what the bills’ authors regard as legitimate political speech. The two bills are long and poorly drafted, carving out exceptions to the must-carry rule that are unclear; these laws would be very difficult to enforce. One possible version of this approach would be to resurrect a version of the old “fairness doctrine,” in which the Federal Communications Commission required “balanced” coverage of political issues (a policy whose constitutionality was upheld in the 1969 Red Lion decision). Conservatives have long opposed such an approach, and the current bills envision state-by-state speech regulation. How a patchwork of 50 different content-moderation rules would work in a global internet is not clear. Moreover, conservatives should worry that such must-carry rules will empower left-wing speech that they dislike.

So we are left with this conundrum. We should be very dissatisfied with a situation where control over political speech is given to a small handful of powerful private companies. Even if you supported their policies up to now, the idea that a single rich individual can purchase one of these platforms and flip its political orientation should trouble you. These large platforms do not have the capacity, the incentives, or, more importantly, the legitimacy to be the custodians of American (or any other country’s) democracy. Leaving content moderation up to them is like leaving a loaded gun on the table and hoping the person sitting opposite you will not pick it up and shoot you.

On the other hand, we should not be happy with the government exercising this kind of control, either. Conservatives in recent years have abandoned their earlier libertarian posture, that government should stay out of people’s lives, for one in which they seek to use government power to control speech. While both the United States and other modern democracies have used state power to put some guardrails around older forms of broadcast speech, this power in a polarized internet age is not something we should welcome.

There is, however, a third approach to this problem, which is the concept of middleware. Back in 2020, I led a Stanford Working Group on Platform Scale, where we argued that the content moderation function needed to be outsourced from the big platforms to a competitive ecosystem of middleware providers who could filter platform content according to the user’s individual preferences. Currently, we are fed Facebook, Google, or Twitter content according to their internal algorithms over which we have no control. Middleware would restore that freedom of choice to individual users, whose agency would return the internet to the kind of diverse, multiplatform system it aspired to be back in the 1990s.
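To make the architecture concrete, here is a minimal sketch, assuming a simple feed-of-posts model. Every name in it (Post, MiddlewareProvider, strict_filter, and so on) is a hypothetical illustration, not any real platform’s API: the point is only to show the division of labor, in which the platform supplies a raw feed and a user-chosen third party decides what gets shown and in what order.

```python
# Hypothetical sketch of the middleware idea: the platform emits a raw
# feed, and a user-selected third-party provider filters and re-ranks it
# before display. None of these names correspond to a real platform API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Post:
    author: str
    text: str
    platform_score: float  # the platform's own internal ranking signal


# A middleware provider is just a function from a raw feed to a curated
# feed; users pick whichever provider matches their preferences.
MiddlewareProvider = Callable[[list[Post]], list[Post]]


def strict_filter(feed: list[Post]) -> list[Post]:
    """Hypothetical provider: drop posts containing blocklisted terms."""
    blocked = {"gore", "slur"}
    return [p for p in feed if not blocked & set(p.text.lower().split())]


def chronological(feed: list[Post]) -> list[Post]:
    """Hypothetical provider: ignore the platform's ranking entirely
    (assumes the raw feed arrives in time order)."""
    return list(feed)


def render_feed(raw_feed: list[Post], provider: MiddlewareProvider) -> list[Post]:
    """The platform applies the *user's* chosen provider, not its own algorithm."""
    return provider(raw_feed)


if __name__ == "__main__":
    feed = [
        Post("alice", "cute cat video", 0.90),
        Post("bob", "graphic gore compilation", 0.95),
    ]
    for post in render_feed(feed, strict_filter):
        print(post.author, "-", post.text)
```

The design choice the sketch illustrates is separation of concerns: the platform remains the pipe carrying content, while the filtering and ranking decisions move to competing providers that the user selects and can swap at will.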

Our working group never solved the problem of devising a business model that would make middleware succeed. There are a couple of existing middleware-type content providers like NewsGuard and SmartNews, the latter a Japanese news aggregator that allows users to filter content according to their political preferences. As time has gone on, I have become increasingly convinced that middleware is not just one possible solution to the online freedom of speech question, but rather the only one that promises to actually solve the problem. Neither platform self-regulation, nor the forms of state regulation coming down the line, will do the job.

I will be posting an interview on my Frankly Fukuyama YouTube channel with Daphne Keller on this subject soon.

Tags: Technology, United States, Frankly Fukuyama