Media / Opinion
By Ernesto de León • Fabio Votta • Theo Araujo • Claes de Vreese • October 28, 2025

1 in 10 Dutch citizens are likely to ask AI for election advice. This is why they shouldn't

Would you trust an AI to tell you who to vote for?

When over 1,000 Dutch citizens were asked how likely they were to ask ChatGPT or another AI tool for advice on political candidates, parties, or election issues, the results were worrying:

  • 77% said it’s unlikely they’d do so
  • 13% sat on the fence (a cautious “maybe”)
  • But 1 in 10 said they’re likely to ask AI for voting advice

Extrapolated to the general population, that amounts to roughly 1.8 million citizens nationwide who could be asking an AI chatbot for voting advice. And given that people tend to underreport behaviors they consider "socially undesirable", the real number could be even higher.
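As a rough back-of-the-envelope check, the arithmetic behind that figure looks like this. A minimal sketch, assuming the "1 in 10" rate carries over unchanged and using an approximate Dutch population of 18 million as the base (both assumptions, not survey findings):

```python
# Back-of-the-envelope extrapolation from the survey (1,220 respondents)
# to the national population. Both inputs below are assumptions:
likely_share = 0.10         # "1 in 10" said they're likely to ask AI
population = 18_000_000     # approximate Dutch population

estimated = likely_share * population
print(f"Citizens likely to ask AI for voting advice: {estimated:,.0f}")
# -> Citizens likely to ask AI for voting advice: 1,800,000
```

Restricting the base to the adult (voting-age) population would lower the estimate somewhat, which is why "roughly" is doing real work in that sentence.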

This is what a survey fielded by the AlgoSoc Consortium (a collaboration of the University of Amsterdam, Utrecht University, TU Delft, Tilburg University, and Erasmus University Rotterdam) tells us. In the lead-up to the elections, the consortium asked 1,220 Dutch citizens how likely they were to ask ChatGPT or another AI chatbot for advice on political candidates, parties, or election issues.

The data, drawn from the LISS Panel of Centerdata, revealed pronounced differences across age groups: the likelihood of using AI for political advice decreases sharply with age.

  • 18-34: 17.2% likely, 18.6% maybe, 64.2% not likely
  • 35-54: 8.4% likely, 15.9% maybe, 75.7% not likely
  • 55 and older: 5.9% likely, 9.9% maybe, 84.2% not likely

Why you shouldn’t ask chatbots for advice

The Dutch Data Protection Authority (AP) recently tested several chatbots against trusted tools like Kieskompas and StemWijzer. The results were clear: these chatbots are not ready to guide voters.

According to the AP’s October 2025 report, chatbots “surprisingly often” name the same two parties, PVV or GroenLinks-PvdA, as the top match, regardless of what the user asks. In more than 56% of cases, one of these two came out on top; for one chatbot, that figure was over 80%.

Meanwhile, parties like D66, VVD, SP, or PvdD barely appeared, and others (BBB, CDA, SGP, DENK) were practically invisible, even when the voter’s positions matched theirs perfectly.
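To make that finding concrete, here is a minimal sketch of how such a concentration check could work: pose many test-voter questions to a chatbot, record which party each answer names as the best match, and measure how often the answers collapse onto the same few parties. The `top_matches` values below are illustrative placeholders, not the AP’s actual test data or methodology:

```python
from collections import Counter

# Hypothetical audit data: the party each chatbot answer named as the
# top match, one entry per test profile (placeholder values).
top_matches = [
    "PVV", "GroenLinks-PvdA", "PVV", "GroenLinks-PvdA", "PVV",
    "VVD", "GroenLinks-PvdA", "PVV", "D66", "GroenLinks-PvdA",
]

counts = Counter(top_matches)
total = len(top_matches)
top_two_share = sum(n for _, n in counts.most_common(2)) / total

print("Top-match distribution:", dict(counts))
print(f"Share going to the two most-named parties: {top_two_share:.0%}")
# A share far above what the test profiles' actual positions warrant
# (the AP reports >56%, and >80% for one chatbot) signals skew.
```

A well-calibrated voting aid tested on diverse profiles should spread its top matches roughly in line with those profiles’ positions; heavy concentration on two parties is the red flag the AP identified.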

In other words, these chatbots are systematically biased: “Chatbots may seem like clever tools,” said AP vice-chair Monique Verdier, “but as a voting aid, they consistently fail. Voters may unknowingly be steered toward parties that don’t align with their views. This threatens the integrity of free and fair elections.”

AI chatbots can mislead late-deciding voters

Unlike official voting aids, chatbots are not designed for politics. Chatbot answers are generated from messy, unverifiable data scraped from across the internet, including social media and outdated news.

That means they can: 

  • Mix up facts or misrepresent party positions.
  • Reflect biases baked into their training data.
  • And present all this as if it’s objective truth.

So while a chatbot might sound confident, it’s not necessarily correct.

This is worrying because voters often look for voting advice late in the campaign, when attention is high but time to reflect is short. Research shows that many people turn to tools like StemWijzer or Kieskompas just days before an election (Van de Pol et al., 2014). It’s reasonable to expect the same pattern for AI chatbots, except these tools aren’t designed, tested, or transparent enough to handle that responsibility.

So, if you’re on your way to the polls tomorrow… maybe don’t stop to ask ChatGPT, “Who should I vote for?”

Many Dutch people share these concerns

It’s not just experts raising the alarm: the public is concerned too. In our survey, nearly seven in ten Dutch citizens (69.2%) said they are worried about others receiving incorrect political information from AI tools.

This shows that most people understand the risks: chatbots might sound smart, but they’re not neutral, and their influence can quietly distort how voters see the political landscape.

What you can do instead

If you’re curious about which party aligns with your views, skip the chatbot detour and use verified, transparent sources:

🗳️ StemWijzer or Kieskompas: independent, evidence-based, and audited tools.

📰 Reputable journalistic sources that check their facts.

🗣️ Or go straight to the original sources: party manifestos and official websites.

© Image: Hanna Barakat & Archival Images of AI + AIxDESIGN / https://betterimagesofai.org / https://creativecommons.org/li...
