
World Wide Weapon

Democracy cries out for policy solutions to tech-fueled extremism.

Steven Hill
Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing
by Chris Bail (Princeton University Press, 240 pp., $24.95)

An author writing a political nonfiction book has to try to see into the future. From finished manuscript to bookstore delivery can easily take two years. A lot can change in that time. Chris Bail’s Breaking the Social Media Prism appears to have fallen victim to this “time lapse gap.”

On January 6, marauding mobs attacked the U.S. Capitol. The attack followed the October 8 arrests of thirteen men in a plot to kidnap Michigan Governor Gretchen Whitmer. These and other domestic terrorist activities were largely incited and planned over Facebook, YouTube, Twitter and other digital media. Yet on April 6, Bail published his book in which he claims, “There is very little evidence to support much of the popular wisdom about polarization on our platforms today.… There is surprisingly little evidence to support the idea that algorithms facilitate radicalization.”

How did Bail come to this contrarian conclusion? He is a “computational social scientist” who studies large data sets of humans’ digital trails, often on Silicon Valley platforms. In addition to other studies the book presents, he and his colleagues at Duke University’s Polarization Lab have focused their research on what they label “extremists” and “moderates” on digital platforms. In one experiment, Bail and his colleagues built two Twitter bots, one that would retweet messages from “prominent” Republicans to Democratic users, the other from “prominent” Democrats to Republican users. More than twelve hundred Twitter users were each assigned a bot, receiving one post every hour for a month without being given more information about what was being tested or measured.

In theory, exposure to arguments from the “other side” should push partisans out of their echo chambers and filter bubbles, and moderate some of the Twitter users’ most extreme opinions. Instead, the opposite happened: Democratic users shifted further to the left, Republicans further to the right. Why? When faced with a barrage of opposing arguments, even slightly partisan users, the researchers concluded, responded defensively, often egged on by the “likes” and “shares” of fellow partisans.

This sounds plausible, at least superficially, but on closer examination neither Bail’s conclusion nor his methodology is convincing.

For starters, Professor Bail’s idea of an “extremist” is not very extreme. The most “extreme” character depicted in Bail’s experiment is “Ray,” a mild-mannered, Clark Kent-style conservative who is reluctant to discuss politics or religion in person but goes postal online. He tries to goad Democrats by attaching photos of Democratic leaders to images of human excrement or pornographic acts. While rude pests like Ray might antagonize their targets, such behavior is really not all that different from the political bile expressed in tabloids since the early days of the American republic.

Thomas Jefferson hired James Callender to publicly malign Alexander Hamilton, John Adams, and even George Washington. In particular, Jefferson begged his ally James Madison to attack Hamilton: “For god’s sake, my dear sir, take up your pen, select the most striking heresies, and cut him to pieces in the face of the public.”

The experiment by Bail and his colleagues would have been more credible if they had chosen messengers who are not such lightning rods for their recipients. Using Nancy Pelosi and Planned Parenthood on the left, or Mitch McConnell and Breitbart News on the right, as your “prominent” messengers is like waving a red flag in front of a bull.

The Real Extremism

More importantly, the set-up of this rather clunky experiment causes me to wonder if Dr. Bail really understands how online extremism actually works. This kind of behavior from Ray and other experiment subjects, which so fascinates Bail and his fellow researchers, is really not the type of digital-media-incited extremism and radicalization that society needs to be concerned about. Examples of real concern include Joseph M. Morrison, ex-Marine and leader of the Wolverine Watchmen, who was arrested for hatching the plot to kidnap Governor Whitmer; Adam Fox, the Whitmer plot’s alleged mastermind; and Barry Croft, another Whitmer plotter, bomb-maker and a national leader of the Three Percenters militia group, whose members were prominently involved in the January 6 attack on the Capitol. Croft says God has granted him permission to commit murder.

These three and many of their ten co-conspirators were frequent users of Facebook, Twitter, YouTube and other digital media platforms. Their social media feeds were filled with far-right disinformation and mutual incitement. The Wolverine Watchmen used Facebook to recruit new members and to praise fellow radicals like Kyle Rittenhouse, the seventeen-year-old who shot three protesters, killing two, with an AR-15-style rifle during Black Lives Matter unrest in Kenosha, Wisconsin, last August.

This motley crew, along with right-wing militias like the Proud Boys and the Boogaloo Bois, is the real deal. Among the features that make such groups most dangerous is that they congregate not in the online bickering arenas on which Bail focuses but in clandestine Facebook Groups. Launched in 2010, Facebook Groups can be walled off as “private”—designated “hidden” or secret and accessible only to admitted members, who must invite new members. The violent anti-government movement has used hundreds of Facebook Groups, under an array of code names, where followers have circulated links, as an investigation by the Tech Transparency Project has found, to manuals on bomb construction, kidnapping, making flash stun grenades, sniper tactics, and murder. Some of these groups have had thousands of members.

Other secret meet-ups on the internet can be found in subreddits, on platforms like Parler, Signal, Telegram, and WhatsApp, and in chat rooms on 4chan and 8kun. Some of these are encrypted, barring both the government and the platforms themselves from entering private groups. What the internet has facilitated, according to Stanford law professor and researcher Nathaniel Persily, is the creation of hidden digital hideouts in which real extremists can “make common cause,” unconstrained by real-world geography, with “people they would not find in their neighborhood or in face-to-face forums.”

As political scientist Joshua Tucker has explained to the New York Times, before the rise of social media, if you were the only person in your area who had extremist views about the overthrow of the U.S. government, organizing with like-minded but geographically dispersed compatriots would be costly and logistically difficult. Now, the use of digital media “drastically reduces these costs and allows such individuals to find each other more easily to organize and collaborate.” This capacity of digital platforms is powerful—and dangerous, much as a firearm, a shoulder-fired rocket, or any other weapon is powerful and dangerous.

The ability of violent extremists to use tech platforms to reach large numbers of people is the greatest concern, says Katie Paul, director of the Tech Transparency Project: “That increases the number of people who may be unstable,” and even one of them could have easy access to dangerous information, making a “lone-wolf attack” possible. Researcher Mia Bloom of Georgia State University has studied the effective use of digital media by terrorist organizations to expose people, mostly young men, to radicalizing messages with the goal of “changing their worldview and eventually guiding them to act.” The 2013 case of the Boston Marathon bombers, she writes, marked a beginning: On digital platforms, two Muslim brothers easily found sources of radicalization and an article by al-Qaeda titled, “How to Build a Bomb in Your Mom’s Kitchen,” which taught them to devise their bombs from pressure cookers filled with explosives.

“Before,” Bloom writes, “individuals would receive guidance and training in person. Now, these same groups simply inspire individuals to carry out attacks on their own, for which the group can claim credit if they are successful. We call that ‘self-radicalizing.’” In the five years after the Boston bombing, the number of social media platforms disseminating terrorist propaganda increased tenfold.

Compared to this activity, the extremism and polarization studied by Bail is fairly tame stuff—and of dubious practical value, since it doesn’t seem to recognize the real sources of danger on algorithmically driven digital platforms. Breaking the Social Media Prism makes only a single mention of a Facebook Group, organized by what Bail calls an “extreme liberal from Texas.” Its primary activity? To “meet for lunch” at the state capital.

From Russia without Love

Bail also dismisses the impact of Russian disinformation in the 2016 election, citing a study finding that 80 percent of Russian disinformation went to less than 1 percent of Twitter users. But that’s not very comforting: One percent of Twitter’s 68.7 million U.S. users amounts to the exposure of more than 650,000 people in a presidential election that was decided by just 70,000 voters in the three battleground states of Michigan, Pennsylvania, and Wisconsin. In the U.S. Senate, where the Republicans had a two-seat majority before the election, three GOP seats were each won by a margin of three points or less (two of them in Pennsylvania and Wisconsin). Close elections can be decisively impacted by factors like campaign messages targeted at narrow demographics of undecided swing voters in key states, as well as super-motivating a candidate’s most loyal base supporters. Twitter, Facebook, and other digital media have developed micro-targeting technologies that have taken such long-used campaign tactics and pumped them full of steroids.

Moreover, even if Russian disinformation did not change a voter’s substantive political views, it can strongly motivate an already-leaning voter to convince ten other people; that type of second-tier impact would not show up in Bail’s measurements, even when multiplied by tens of thousands of voters. It might also create so much confusion for some voters over which candidates support what policies that those voters don’t vote at all, a well-known tactic for voter suppression.

These are just a few of the methodological issues that pervade this book. Bail doesn’t even attempt to account for these sorts of extenuating factors, either in the design of his experiments or in his conclusions.

Bail also presumes to speak for “moderates,” as he defines them, hoping that his work in reducing platform polarization will encourage moderates to engage more in online political discussions without being run off by rude “extremists” like Ray. Bail claims that “social media allows people to present different versions of themselves, monitor how others react to those versions, and revise their identities with unprecedented speed and efficiency.” The result is “false polarization,” the “tendency for people to overestimate the ideological differences between themselves and people from other political parties.”

But the vast majority of people use the major digital platforms, especially Facebook and its companion businesses Instagram and WhatsApp (which, at a cumulative count of 5.5 billion users, amount to over half the world’s population), for simpler matters, like staying in touch with friends and family or sharing vacation and puppy photos. For most of them, it’s their own private post office. Smaller subsets of regular users share news and political information, but generally among their “friends” and “followers,” who tend to agree with them. This means there aren’t many regular online users “revising their identities,” which would be hard to do amidst their gallery of selfies and happy birthday wishes. Nor does much nasty partisan bickering seem to occur among the vast majority of users.

Instead, our focus should be on the impact of the huge audience size, the frictionless amplification of information, and the hyper-targeting capacity—all without geographic constraint—that have been unleashed by these new technologies. These unprecedented “tools of virality” are being deployed by real extremists—including a small number of super-charged political actors with huge numbers of followers, like Donald Trump—to not only reach an enormous audience but to “long tail” micro-target relatively small niches of users with their warped versions of reality. These increasingly alarming features are having a disproportionate impact on our political discourse and electoral outcomes.

The right kind of research would help us better understand the consequences when “engagement algorithms” amplify hair-raising spectacles like the real-time livestreaming of child abuse or pornography (as reported by the New York Times in graphic detail), or the Christchurch massacre of Muslims, which the killer broadcast live over Facebook and which was then seen by millions on YouTube. As documented by research organizations like the Stanford Internet Observatory, the platforms and their technologies have enabled the broadcasting and amplification of such atrocities and extremism on an unprecedented scale.

These digital platforms have been cleverly used by political actors to foment everything from spoiled elections to genocide. Members of the Rohingya Muslim minority in Myanmar were brutally massacred at the behest of a Buddhist leader and his fanatical followers using Facebook and other digital media to whip up anti-Rohingya hysteria. Pro-democracy activists in the Philippines have been harassed and murdered by a quasi-dictator, President Rodrigo Duterte, who has used Facebook not only to get elected but to foment violent paramilitary attacks against his opponents, and has used YouTube and Twitter to spread “deep fake” videos of political opponents.

Bail’s research, and much of the research he cites, seem to tiptoe around the core extremism and polarization being incited by bad actors who take advantage of the digital platforms’ unique communication and manipulation capacities.

Breaking the Social Media Prism does include some good discussion of the latest social science research, and Bail is a helpful guide through the literature. The extensive endnotes, bibliography, and appendix are themselves treasures for research geeks and readers who want to immerse themselves in this material. Bail also does a good job of connecting his work to its historical antecedents, as with the origins of the term “echo chamber” in The Responsible Electorate, published by V.O. Key, Jr., in 1966.

Reimagining Facebook

The book’s most valuable contribution lies in its account of an experiment in which Bail and his colleagues at Duke’s Polarization Lab imagine an improved Facebook-like platform designed to bring users from opposing political parties into direct online conversation, to see whether doing so could reduce polarization and produce content with bipartisan appeal.

“Instead of boosting content that is controversial or divisive,” writes Bail, “‘like’ counters and numbers of ‘followers’ could be replaced by meters that show how people from across the ideological spectrum respond to people’s posts in blue, red, and purple.” New incentives and status signals could be created, he says, like publicly displayed merit badges for users who attract diverse audiences, and leaderboards that track the frequency with which prominent users appeal to people from both parties.

It’s an inspired and hopeful vision. Bail and his colleagues do in fact design a new social media app, called DiscussIt, but, for reasons unexplained, do not incorporate these elements into it. Instead, their platform tried a very different approach: The online discussants would be anonymous and use androgynous names, stripping away cues about political sympathies, race, and gender in the hope of lowering the temperature of their online interactions.

The Polarization Lab tried a test run of DiscussIt with twelve hundred subjects, Republicans and Democrats. For one week, they responded to a discussant partner who, unbeknownst to them, was from a different political party. Bail reports that they showed “significantly lower levels of polarization after using the platform for just a short time” and “expressed fewer negative attitudes toward the other party or subscribed less strongly to stereotypes about them.” He concludes, “The results of the experiment make me cautiously optimistic about the power of anonymity.” He cites several other studies of anonymous users that support his optimism.

It is an interesting approach, and one can appreciate the overall attempt to rethink the design of Facebook and other digital platforms. Yet unlike Bail, I am deeply skeptical about the power of anonymity. In spite of his experiment, online anonymity and its malicious twin, “false identity,” have allowed real extremists, internet trolls, and bad political actors to wreak much havoc. The notorious digital platform 4chan—a hideout for hackers, hate purveyors, child pornographers, murderers, and other digital misfits—features anonymous, anything-goes forums. Anonymity is what allowed Jefferson to secretly hire Callender to attack Washington and Hamilton. To think that anonymity could somehow be the key to reducing online polarization is unsupported by centuries of common experience. And it remains to be seen whether Bail’s one-week experiment could be scaled to include millions of people and longer periods of time.


I came away from this book questioning whether researchers like Bail have a strong handle on what needs to be studied in order to understand the powerful impacts of digital platforms on polarization and radicalization. It is quite possible that the efficacy of their experiments and research is limited, like an algorithm itself, by the quality of the data available for their computations. Silicon Valley businesses are notorious for refusing to provide data access, so Bail and his colleagues simply may not have access to the quality datasets needed to credibly study these impacts. That data limitation, in turn, appears to be determining what is studied and researched.

In the meantime, the meta-message from Breaking the Social Media Prism is that the digital platforms are not the problem. Bail cites the American National Election Study conclusion that “the American public has not grown more polarized,” but that study is focused on the general population, not on the internet-empowered extremists and self-radicalizing lone wolves who shuttle in and out of Facebook Groups and other platform hideouts. Yet Bail doubles down, writing, “Our focus on Silicon Valley obscures a much more unsettling truth: The root source of political tribalism on social media lies deep inside ourselves.” This has a ring of truth, but Bail never acknowledges the ways in which the unique powers of digital platforms have pushed the tribalist tendencies of a small but dangerous faction of people into overdrive.

And Bail’s complacency has political ramifications. If the data and research do not provide clear, unambiguous answers about impacts, many will continue to conclude that there are few worthwhile policy interventions until we “better understand” how these technologies work. Indeed, Bail discounts the efforts of policymakers and ex-industry experts (“Silicon Valley apostates,” he calls them) who, he believes, propose “untested interventions.” Instead, despite a lack of cooperation from the platforms and the apparent inability of Bail and his fellow researchers to reach any solid or helpful conclusions, he proposes a seemingly endless maze of research and testing. His anodyne conclusions let the platforms off the hook for the way they have created dangerous technologies that frictionlessly network the most extreme elements of our societies.

The type of research that is needed should be based on experiments that test a completely new business model for the Big Tech platforms. That would mean converting the platforms into investor-owned utilities; limiting their “surveillance advertising” revenue model; increasing information friction; and dialing down their unlimited reach, micro-targeted engagement, “dark pattern” design, and amplification algorithms. It would mean subjecting the platforms to a “data oversight agency” like the newly created California Privacy Protection Agency, which, among other tasks, would make suitable data available to researchers like Bail.

But those types of bold experiments are absent from Breaking the Social Media Prism. So, policymakers will have to push forward as best they can, figuring out how to rein in Big Tech, without much help from Professor Bail.

Steven Hill is former policy director at the Center for Humane Technology and co-founder of FairVote. He is author of seven books, including 10 Steps to Repair American Democracy.

