June 28, 2024

Caught between grand ambitions and small print: The AI Act and the use of GenAI in elections

How will generative AI affect elections? On the eve of the European Parliament elections, the alarms are sounding: “2024 will be a litmus test for AI’s effects on elections — and voters’ faith in them”. “AI deepfakes threaten to upend global elections. No one can stop them”. These alarmist headlines from across the globe, including from reputable outlets like The Guardian and The Washington Post, highlight the widespread concern about how generative AI might disrupt elections. But what is all the fuss about?

Generative AI technologies add a new chapter to the playbook of political campaigns and of actors seeking to influence elections and their integrity. Much attention is focused on how deepfake videos might affect vote choice. The “smoking AI gun” that everyone is waiting for is an AI-generated video that goes viral in the final 24 hours of an election and swings the outcome against expectations. While this kind of effect still deserves attention, (generative) AI tools can affect many more aspects of an election.

The recent ‘tech accord’ signed by most major AI companies made many promises, including limiting the creation of deceptive AI election content, adding provenance signals to identify authentic content, making serious efforts to detect AI-generated content, providing swift and proportionate responses to take AI-generated content down, and using the 2024 elections as a learning opportunity, also with an eye to increasing public awareness. This all sounds wonderful, but the jury is still out on how these promises will be implemented, how effective they will be, and how well the tech companies will actually deliver.

And why should tech companies, and political parties too, take action? If nothing else, the EU AI Act will soon offer a framework that mandates actual behaviour rather than just promises and good intentions. Or will it?

The declared goal of the AI Act is to ensure the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection for, among other things, democracy and fundamental rights, such as the right to freedom of expression, non-discrimination, and political participation (Art. 1(1), AI Act). Addressing the use of AI in democracy and protecting democratic values is hence a core ambition of the Act. So how will the AI Act contribute to making the use of GenAI in campaigning safer?

To begin with, the AI Act will turn most of the voluntary self-obligations from the tech accord into mandatory legal requirements. Providers of GenAI models, but also political parties, will have to make sure that AI-generated content is recognisable as such. Providers of at least the Very Large GenAI systems will also have to monitor for systemic risks, including “any actual or reasonably foreseeable negative effects on democratic processes” (Recital 60m, AI Act).

But the AI Act goes further. Late in the negotiations, the list of high-risk AI systems was expanded with “AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda” (Annex III, 8(aa), AI Act). The phrasing itself already triggers a range of definitional questions that cannot be discussed here. For now, suffice it to say that adding the use of AI in campaigning to the list of high-risk AI systems can potentially address a range of urgent concerns that the tech accord left unaddressed.

Take, for example, the obligation to implement and maintain a continuous risk management system covering all kinds of risk, not only those arising from the deceptive use of GenAI. Such risks could include the misuse of GenAI to create political advertising campaigns that dissuade voters from voting, or the mass fabrication of personalised political messages. The requirements for the quality of training, validation, and testing data could help to make sure that GenAI systems do not internalise biases that lead to discriminatory or insufficiently representative outputs. In practice, this could mean, for example, that an AI system may not be trained on data from public fora where men are strongly overrepresented or on social media data riddled with political inaccuracies, and that bot-generated content from Russia and China must be excluded from the training data.

In other words, the AI Act could add important substantive safeguards for the responsible use of GenAI in elections. At least one devil, however, sits in the distribution of responsibilities along the value chain. Because GenAI is a form of General Purpose AI, it is in principle not OpenAI or Google that decides whether the AI is ‘intended to be used to influence the outcome of an election’, but the political party. Many GenAI providers even exclude the use of their systems for campaigning in their Terms of Use; OpenAI, for example, explicitly prohibits using ChatGPT to engage in campaigning. Following the logic of the AI Act, the obligations to ensure compliance, for example with the systemic risk monitoring provisions or the data quality requirements, pass on to the entity that modifies the intended purpose, in this case: the political party. On some accounts, this result is not unreasonable: a party using GenAI as part of its campaign should be expected to monitor for any risks from doing so.

Other obligations, however, such as the obligation to ensure the quality and representativeness of the training data, lie outside the political party’s sphere of control. In theory, the AI Act foresees that the original provider, e.g. OpenAI, remains obliged to lend its assistance to enable compliance, unless it has expressly excluded use for campaigning in its Terms of Use. Concretely, this means that political parties using GenAI systems from these companies will not only potentially be in breach of the providers’ Terms of Use, but will also carry the largest chunk of the regulatory burden.

Seeing that political parties will not be able to address many of the concerns around training and transparency without the collaboration of the original developers, this is an unfair outcome, and one that can potentially deepen the digital divide between rich and less well-funded political parties. The AI Act’s current distribution of responsibility along the value chain risks leaving generative threats to the democratic process unaccounted for. Simply signing away responsibility via the Terms of Use cannot be the solution.

This article was first published on Internet Policy Review on June 7, 2024.
