AI: Democracy's Ally or Enemy?

If artificial intelligence isn't shaped with democratic principles in mind, it will be transformed into a tool that weakens them.

Beth Kerley

Advances in artificial intelligence (AI) are changing the playing field for democracy. Since social media’s emergence as a tool of protest, commentators have regularly stressed how our evolving technological landscape is changing our political world. The digital tools on which we rely help to determine how people express themselves, find like-minded communities, and initiate collective action. At the same time, these technologies also affect how governments of all political stripes monitor people, administer services, and, in settings where democratic guardrails are weakened or absent, dole out repression. 

Thanks to recent leaps in the development of large language models, the global proliferation of AI surveillance tools, and growing enthusiasm for the automation of governance processes, we are poised for another seismic shift in the balance of power between people and governments. Yet there remain many questions around how democratic norms and institutions can be brought to bear in shaping the trajectory of AI. Since the “liberation technology” buzz of the early 2010s, experts and publics alike have grown more skeptical of assumptions that technological development will automatically advance values such as free expression and freedom of association or sustain a level playing field for civic engagement. Digital tools that foster open communication can also make it easier for repressive regimes to surveil and harass opponents, or skew public debate by disproportionately amplifying conspiracy theories and state propaganda.

With more than enough evidence to hand that digital advances do not inevitably work to democracy’s benefit, its advocates must act to erect guardrails around AI development and deployment. The determination of authoritarian countries to integrate AI into their repressive toolkits only heightens the urgency of this task.

A new report from the National Endowment for Democracy’s International Forum for Democratic Studies highlights principles that should inform an approach to AI that is rooted in democratic values. The report grew out of a conference that brought together tech and democracy experts from a number of countries, especially from the digital rights and open government communities. It scrutinizes the popular narratives and social structures that are shaping the AI landscape, the obstacles that stand in the way of upholding democratic norms, and the ways in which civil society can intervene to ensure that AI evolves in a way that is friendly to democracy.

A key theme is the need for democracy’s allies across government, media, civil society, and the private sector to see the human agency behind AI models, rather than thinking of technology itself as an agent. This recognition is critical not just for theoretical reasons, but to ensure democratic accountability in the age of AI. Like all human structures and choices, the decisions, assumptions, and power structures that feed into AI systems are fallible, and they must be open to scrutiny. As one conference participant emphasized, “It is not the computer program . . . but rather the people and institutions that create and implement it” that should be held responsible for upholding human rights norms.

At the most basic technical level, AI models reflect the societies that produce their data, the inequalities that shape who is or is not represented in datasets, and the choices or assumptions of developers. AI systems designed in one country may cause unexpected harm when exported to settings with different governance structures or different digital or data infrastructure. One Latin American participant warned the group to “beware of datasets designed in the Global North.”

Relationships between sellers and buyers of AI technology shape how it is deployed in the public sector. AI-enabled tools for law enforcement, for instance, may sometimes serve political or commercial ends as much as or more than public safety. To better understand the logic behind the deployment of Huawei facial recognition cameras in Belgrade, activists unpacked the broader Serbia-China cooperative relationship that underlay the project. 

Despite the new challenges AI presents, rooting discussion in familiar democratic values can help to illuminate the stakes. For instance, if a government agency cannot explain a decision it has made with the help of an AI system, then such a decision violates principles of due process and government accountability. Whether at the level of design, procurement, or deployment, focusing on human choices will be critical to holding institutions accountable as AI systems come to play a growing role in how they are run. 

Another theme the report highlights is the urgency of equipping democratic societies and institutions to keep up with a set of AI harms and risks (near-, mid-, and long-term) that are constantly evolving with the technology itself. The data used to train models, the outputs they generate, and the inferences they draw can all endanger privacy, a pillar of free thinking and free expression in a democracy. AI’s privacy risks are growing in scope thanks to recent advances, and while they include data falling into the hands of vendors in authoritarian countries (most prominently the People’s Republic of China), they are much broader than this.

While traditional concepts of data protection emphasize a specific category of “personally identifiable information,” AI systems can piece together other types of data, such as location—even when anonymized—to determine someone’s identity or the demographic group to which they belong, a capacity that enables algorithmic discrimination. They can also draw inferences about personal attributes based on anonymous online posts, undermining the digital anonymity that has long been a valuable shield for dissidents in repressive settings.

Other fundamental democratic principles such as government accountability, equality under the law, and labor rights are also affected by AI technologies. Though sometimes misleadingly hailed as impartial, algorithmic systems can end up amplifying bias and exclusion. The algorithmic distribution of public benefits can, for instance, leave behind people whose circumstances are not adequately captured by a given model or who are falsely flagged as fraud risks. Members of marginalized groups are more likely to be misidentified by facial-recognition cameras and penalized by automated hiring systems. AI systems used as management tools, whether in the gig economy or by traditional employers, can pose new challenges to labor rights when there is no “human in the loop” available to hear appeals against wrongful penalties. 

Critically, many of these challenges require social and political responses as much as technical ones, and in some cases they demand complex trade-offs between competing values. For instance, privacy benefits when systems collect only the minimum data required, but equity may be better served by collecting sensitive demographic data in order to test for bias. Similarly, keeping AI development within a few large companies raises concerns about opacity and the concentration of power. Open models, on the other hand, might more easily be co-opted for anti-democratic projects such as using “deepfakes” to mislead voters or flooding the information space with hate speech, threats, and harassing content aimed at silencing one’s opponents. Finally, if digital policies and regulations are not carefully tailored, governments and malign actors can abuse them for purposes contrary to democratic values, something we have already seen play out with data protection laws, which officials in Brazil have invoked to resist sharing public information and kleptocratic enablers have used in attempts to chill critical reporting.

These fundamentally social and political challenges mean that open discussion on managing the evolution of AI technologies must extend beyond the technical community. With tech talent and proprietary information concentrated in the private sector, members of the public and their democratic representatives are at a disadvantage when seeking to regulate, purchase, or even simply use AI systems. New strategies, processes, and collaborations are needed to give real force to democratic principles of transparency, accountability, and privacy in this area. In many settings, established norms, learning processes, and institutions that can address the impacts of AI are either nascent or absent altogether.

Civil society can help to close these gaps through awareness-raising and litigation, engagement with government institutions on laws and norms, and promoting responsible approaches to AI development. In both regulatory and development processes, involving the people and communities likely to be affected, from the ground up, can also help to ensure more robust rights protections.

As AI touches many different aspects of social and political life, addressing the challenges it poses will require more than just technical skills. This means that activists will need to forge new partnerships and knowledge-sharing initiatives to take these issues on, whether by collaborating with independent journalists and labor unions or working to close divides between traditional and digital human rights groups. Such partnerships can help provide a fuller understanding of the challenge by connecting communities, such as open-data activists and privacy advocates, that have historically focused on different facets of the problem.

On the other hand, civil society organizations with strong technical skills can also leverage digital innovation to deepen democracy and hold institutions accountable. From Hungary to Brazil and Peru, activists and journalists have designed AI tools to help citizens make sense of public information or flag indicators of corruption. Such projects can even help to counter information asymmetries around AI itself—for instance, by enabling researchers to identify facial recognition purchases in procurement documents. Tech-savvy activists can pinpoint the vulnerabilities and risks posed to human rights by government or corporate AI systems. Meanwhile, researchers are exploring the use of AI for “collective intelligence,” enabling new forms of public engagement in decision-making, including decisions about how to govern AI.

The ways in which AI impacts us will hinge on how well democratic mechanisms are working to uphold government transparency, support deliberation, and engage affected communities in decision-making. AI’s trajectory depends in part on the health of democratic institutions, and the health of democracies will likewise be affected by our choices around AI. If we are to ensure that these choices reflect the full range of social, civic, and human rights concerns at stake, civil society will have an important role to play in determining how we use and live with AI.

Beth Kerley is senior program officer with the International Forum for Democratic Studies at the National Endowment for Democracy. This article is drawn from the International Forum’s recent report “Setting Democratic Ground Rules for AI: Civil Society Strategies,” based on a workshop held in Buenos Aires, Argentina, in May 2023. The author thanks Maya Recanati for her assistance.

Image: A chess board. (Unsplash: Alexander Mills)
