Democracy Dies in Disinformation
Autocrats may think the internet is their best friend, but democracies can give facts a fighting chance.
President Joe Biden says he wants to renew America’s democratic alliances. He has outlined a new foreign policy agenda, which aims to reinvigorate democratic values and provide stronger competition to rising authoritarian powers. His administration writes about fighting kleptocracy and climate change, conquering inequality, and standing up for human rights. There’s talk about a “Summit for Democracy” to promote the new agenda.
None of it can succeed, though, without an information environment that allows arguments to be won with evidence and allows citizens to deliberate about significant issues in a full, free, and fair conversation. Such an environment does not exist today in the United States or most other democracies.
Donald Trump’s defining technological legacy was to demonstrate the vulnerability of citizens of democracies to digital exploitation. As President, he displayed a shrewd ability to manipulate our fractured information environment to his personal advantage, promoting a mix of extreme voices that inflamed polarization and nearly propelled him to a second term.
Our authoritarian adversaries, meanwhile, are busy pursuing their own information strategies. China and Russia have defined their “cyber sovereignty” agendas, based on the idea that countries are entitled to make their own rules for governing the internet even if those rules violate universal principles like human rights. U.S. efforts to oppose them have so far been contradictory and confused.
Trump may have ramped up rhetorical confrontation with China. In private, however, he undercut his public posture, telling Xi Jinping that the Chinese ruler should “go ahead” with the building of concentration camps for Uighurs. Unsurprisingly, Trump’s messaging failed to unite democratic allies around a common agenda. When Trump did push back against the Chinese tech industry, he adopted some of the same practices for which he had criticized Xi, like impulsively banning China’s WeChat and TikTok apps from the U.S. market and pressing TikTok to offer itself for sale to a Trump political supporter.
Trump tried to portray our tech competition with China as a matter of “their side” versus “our side,” but since the sides were not defined by common values or even interests, it just looked like bullying, with no power to win over allies. Instead, many in the EU came to view China and America as equally bad options.
After Biden’s presidential victory, the EU leaked a memorandum arguing that national tech policies should be unified by “shared values.” Incoming Secretary of State Antony Blinken has said something similar, describing “techno-democracies” that must work together to defend their values against the world’s “techno-autocracies.” But what are these values? Do we know what a democratic information environment looks like anymore?
It used to be pretty easy to define the difference between a democratic information environment and a dictatorial one. They had censorship and state-controlled media; we had freedom of speech, pluralism, and the “marketplace of ideas.”
Today, in contrast, dictators don’t just censor by constraining the amount of media content: They exploit freedom of expression to flood the environment with so much disinformation that the truth is swamped, a sort of censorship through noise. Some of the most egregious cases occur in democracies like Brazil or the Philippines. Even in Russia and China, leaders don’t merely restrict communication; they overload and flood information channels. Pluralism, meanwhile, has curdled into polarization so extreme that it destroys the possibility of a shared reality, let alone a “marketplace of ideas.”
America, of course, is in no position to lecture anyone on the subject of disinformation or tech regulation. Instead, there is a risk that democracies will fracture even further, into “splinternets,” unable to coordinate norms and standards. If so, it will become still easier for authoritarians to set their own rules. Or, if the EU continues with projects like its proposed digital regulations to govern data privacy, platform accountability, and the economic power of “gatekeeping” internet companies, the process could shift Europe away from the U.S. tech sphere. In fact, if growing numbers of countries see little cost to devising their own rules, the world could see a cyber race to the bottom.
The best current ideas in this field come from smaller frontline states, like Estonia and Taiwan, for which finding a way to design an open yet secure information environment is often existential. But without the United States, it will be hard to make real progress.
So, where do we start? One place is Article 19 of the Universal Declaration of Human Rights, which guarantees not just the right to “impart” information but the right to “receive” information. In the words of David Kaye, former UN Special Rapporteur on freedom of expression, “viral deception” can be seen as a
tool to interfere with the individual’s right to information. Coordinated amplification . . . interferes with the individual’s right to seek, receive, and impart information and ideas of all kinds. Conceived this way, it seems to me that solutions could be aimed at enhancing individual access to information.
The first step to enforcing this right is a simple demand for transparency. Individual online anonymity is of course a right and, in many countries, a necessity. But covert, coordinated, inauthentic behavior should be illegal. Current references to Russian “meddling” or Chinese “interference” are vague and subjective. They need to be embodied in clear laws and rules.
We also need transparency around political advertising: People have the right to know who is targeting them, how they are profiled, and how much money is spent on the effort. Such transparency is not a partisan demand.
Perhaps most important, we need transparency on algorithms, which form the actual design of the online world. When Trump complained that Google pushes conservative news down and liberal news up, the simple fact was that we had no way of knowing whether he was right. Platforms should give the public the ability to evaluate their algorithms through government oversight, academic analysis, and public interest reporting. If these algorithms are undermining human rights norms, endangering people’s safety, or even risking lives, the companies responsible should be forced to change them.
Transparency is not an end in itself, but the necessary first step toward creating an internet grounded in human rights. The EU has already outlined first steps in the European Democracy Action Plan (EDAP), released at the end of last year, which includes transparency for political advertising, basic algorithmic accountability, and punishment for operations that seek widespread manipulation of discourse through inauthentic behavior. The United States needs to do the same. This is, of course, the type of transparency that the Putins and Xis of this world do not want: They want to hide from their own populations the ways they use troll and bot armies to distort discourse at home and abroad, how they game domestic algorithms in service to the state’s censorship agenda, and how they suck up and manipulate their citizens’ data.
Protecting personal rights and transparency in the digital space will strengthen national security and allow democracies to push back more effectively against their authoritarian counterparts. It’s the lack of these rights that makes democracies vulnerable. While it has become a catchphrase to proclaim that values, interests, and security are intertwined, this is a case in which these concepts really are mutually reinforcing.
Even if we are able to establish a sort of Helsinki Final Act for the internet, however, how can democracies work together to uphold these rights?
Some experts propose creating a multilateral tech alliance to offset China’s digital ambitions, safeguard the West’s technological leadership, and ensure that liberal democracies can shape emerging technologies to accord with human rights principles. But if we decide to do so, there are pitfalls to avoid.
It’s important that the alliance be organized around a broader mandate than simply countering China and Russia. Calling attention to the repression and political interference by these regimes is justified; but China and Russia are not responsible for all of democracy’s digital ills, like the rise of hate speech and disinformation. To deal with these phenomena, democracies will have to ask themselves other hard questions: Why do extreme, violent, and conspiratorial voices thrive online? Why have illiberal leaders been able to exploit these systems and amass huge followings by peddling lies and deceit? To what extent are gatekeeping companies—and their disproportionate market power—responsible for allowing these practices to thrive?
Democracies will have to devise common principles aimed at restoring sanity and civility to online discourse. It would be self-defeating to focus primarily on the external challenges while neglecting the internal sores.
Furthermore, an organization designed to promote a liberal democratic internet can’t be made up exclusively of liberal democracies from North America, Europe, and East Asia. “Swing states” like India, Brazil, and South Africa are important areas in which the contest between liberal and authoritarian values is being waged. China is investing billions of dollars in the Digital Silk Road, and Russia is making new incursions into the Middle East, sub-Saharan Africa, and even Latin America, because they know that partnerships with emerging nations are the key to enhancing their geopolitical power. Instead of including only the most select and stable democracies, the goal should be to persuade states in the middle to align with a liberal democratic agenda rather than being co-opted into the authoritarian camp.
Finally, the alliance should resist the urge to tackle a less-than-coherent laundry list of issues. Instead, it might well focus on these three items:
First would be to tackle unaccountable platforms, non-transparent algorithms, and exploitative social media feeds that fuel polarized perspectives, harmonizing the nascent EU regulations like EDAP and the Digital Services Act with potential U.S. regulations to provide a consistent approach to these problems.
Second would be to protect against external interference or manipulation. If evidence were to emerge about Russian leaks meant to sway an election outcome, the alliance could respond collectively through sanctions, cyber actions, and other means in order to push back against the aggression and deter future acts.
Third would be to forge a new consensus about a democratic vision for an open internet, establishing a higher set of standards for a free, open, and pluralistic information ecosystem.
Picking the Judges
Regulations and alliances by themselves will not rebuild democratic discourse. People will still choose to believe in extremist ideologies and will still be vulnerable to campaigns that seed mistrust and undermine a sense of shared reality. But regulation can help create a more transparent internet in which those who support evidence-based communication have a fair chance to compete with propagandists and extremists.
But who will these stakeholders be? It will have to be media that are committed to transmitting facts, even to mistrustful audiences, and to encouraging dialogue, even between polarized publics—in short, media committed to public service.
Today, media have little incentive to undertake such a commitment. To make money on the internet, they have to follow the ad-tech market, which rewards clickbait and polarization. If you want reliable profits in the attention economy, the easiest way to get there is to stoke hate and lies. Moreover, subscription models create, by definition, gated information communities.
One idea for breaking through the existing paradigm would be to create, within democracies, “civic media” funds guided by incentives beyond the current ad-tech economy. Such funds could also support the creation of online spaces run on rules dedicated to improving civic engagement: online versions of town halls, cross-partisan institutions that actually bring people together. Promising thinking is emerging in this area. Luminate and BBC Media Action, for example, propose establishing an International Fund for Public Interest Media to forestall the “extinction” of independent journalism.
EDAP has also taken a step in this direction: It mentions the need to support cross-border online journalism and “civic engagement and an active civil society, not only at election times.” Empowering citizens to take part in governance would emphasize the difference between the use of technology in democracies and its manipulation in dictatorships.
It could also help address the underlying frustrations that have led to so much carnage in democracies. The internet is now a space that concentrates powerless rage. It is an easy field for extremists and hostile states to exploit, a place where populists push propaganda and promise that they, and they alone, can help people “take back control.”
Digital tools can indeed be a way to take real control—if democracies will establish clear rules of the road while also ceding some power to citizens. Small-d democrats need the tools and a strategy to fight back.
Steven Feldstein is a senior fellow at the Carnegie Endowment for International Peace’s democracy, conflict, and governance program. His forthcoming book is The Rise of Digital Repression: How Technology is Reshaping Power, Politics, and Resistance (April 2021).
Peter Pomerantsev, a contributing editor of American Purpose, is a research fellow at Johns Hopkins University where he co-directs the Arena Initiative, dedicated to overcoming disinformation and polarization. He is author, most recently, of This is Not Propaganda: Adventures in the War Against Reality, winner of the 2020 Gordon Burn Prize.