While generative AI tools could offer significant benefits in fields such as medicine, manufacturing, and education, how they are applied to electoral politics must be carefully regulated. Otherwise, they will undermine, rather than strengthen, rule by the people.
By Kelly Born, Director of the Democracy, Rights, and Governance initiative at the David and Lucile Packard Foundation
SILICON VALLEY – Predicting how generative artificial intelligence might affect democracy is a formidable challenge, given that its potential applications are still largely unknown – and seem virtually limitless. While narrow AI tools, designed for specific tasks like reconciling voter records, are already in use in various countries, the impact of generative AI is harder to foresee. Generative AI is not merely another application, like a social-media platform, but a foundational technology more akin to the emergence of the internet itself. It will influence democracy directly, by transforming the mechanics of elections and governance, and indirectly, by threatening to shift the very foundations of information ecosystems, public trust, and opinion.
In terms of direct impact, generative AI could revolutionize policymaking by enabling a more accurate and nuanced understanding of potential policy outcomes. Organizations like Climate Change AI, for example, already use this technology to explore how “roads, power grids, and water mains must be designed to account for the increasing frequency and severity of extreme weather events.” Law-enforcement agencies use it for surveillance and predictive policing. More recently, lawyers and judges have begun to use generative AI tools like ChatGPT to help draft legal filings and even court rulings.
Meanwhile, concerns are growing regarding how generative AI will affect elections. At least 45 countries will hold elections in 2024, including globally consequential races in the United States and the European Union.
While narrow AI is already helping to streamline election administration, generative AI could introduce new biases and uncertainties. America’s highly decentralized election system, for example, encompasses 10,000 jurisdictions, with each state maintaining separate voter records that must be constantly updated as voters move, die, or otherwise become ineligible. Narrow AI systems are now extensively used in this process, and while they can boost efficiency, early evidence suggests that the algorithms used to maintain voter rolls struggle with matching Asian names and may be biased against minorities in general. Similar biases have been identified in the use of AI for signature verification, a common requirement for mail-in ballots.
But AI’s influence could easily extend beyond merely administering elections; it could also be used to help shape electoral rules and structures, affecting how competitive elections are, the degree of political polarization, voter turnout, and candidate incentives. As many as 90% of US Congressional districts are considered “safe” from a partisan perspective – meaning that the outcome is typically predictable, favoring either Republicans or Democrats. There are now dozens of apps helping legislators draw district lines. While advanced generative AI programs could be used to improve the fairness and representativeness of the US political system, they could just as easily enable even more aggressive partisan gerrymandering by incumbents, further shielding parties and candidates from genuine competition.
Beyond redistricting, generative AI could facilitate other structural reforms. European countries, for example, use a mix of closed-list and preferential voting systems, whereas the US uses single-member, winner-take-all districts. Reformers seeking to promote moderation and reduce polarization debate the merits of changes such as ranked-choice voting, open primaries, and proportional representation. Yet the likely impact of these reforms across diverse political environments, and how they might interact when combined, remains unclear. Generative AI could shed light on these complex dynamics, enhancing our ability to estimate electoral reforms’ long-term effects.
While these changes would directly influence the mechanics of democracy, the indirect effects are perhaps more concerning. AI is likely to disrupt labor markets profoundly, with all of the attendant political fallout. It may also reshape the information ecosystems that governments, candidates, and voters rely on. Generative AI could prove to be a valuable asset for journalists, streamlining tasks such as summarizing government hearings, organizing contributions from various sources, and even assisting in the drafting and editing of articles. But it could also result in even more journalists being laid off.
Social-media platforms could use AI to moderate online content, combating the spread of election disinformation and sparing human content moderators from the often traumatic task of screening the internet’s most offensive content. But generative AI will also almost certainly exacerbate today’s disinformation crisis, making it easier to craft highly personalized, persuasive content that can be tested, tweaked, tailored, and targeted across all media. In April, the Republican Party released its first-ever AI-generated attack ad against US President Joe Biden. It is not far-fetched to imagine a political landscape flooded with cheaply produced ads (or even podcasts) using the voices or images of trusted sources and crafted to manipulate specific audiences, using personal online histories to identify and exploit psychological vulnerabilities. Whether this really is a game-changer remains to be seen.
Generative AI’s capacity to create persuasive disinformation in multiple languages could also be a boon for foreign adversaries, previously plagued by a lack of language and cultural fluency. At the same time, pro-democracy advocates could use these tools to develop more persuasive anti-authoritarian messaging micro-targeted at the most vulnerable communities. As it stands, however, democratic voices are significantly outnumbered and outgunned.
That said, although AI can generate content, it still requires distributors like Facebook to reach an audience. AI labs and social-media platforms must work together to develop effective mechanisms to prevent the spread of disinformation.
It is important to note that these foundational large language models are as biased as the corpus of human history on which they are trained. As such, they favor cultures with a larger body of written and digitized materials (English and Mandarin are the preferred languages), and the histories of conquerors are overrepresented. Such biases can be extremely dangerous, particularly at a time when political polarization is on the rise across liberal democracies and 40% of Americans deny the outcome of the 2020 presidential election.
At its core, democracy depends on citizens’ trust in both leaders and institutions to represent their interests. But trust is fragile and must be safeguarded. While generative AI could offer significant benefits in fields such as medicine, manufacturing, and education, its impact on democracy must be carefully considered. Otherwise, it will undermine, rather than strengthen, rule by the people.
© Project Syndicate 1995–2023