
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) has issued a stark warning to voters in the Netherlands, urging them not to rely on artificial intelligence tools such as ChatGPT and other chatbots for political or voting guidance. According to the watchdog, these AI models produce biased, unreliable, and polarized advice that could distort democratic decision-making ahead of the Dutch parliamentary elections on October 29, 2025.
AI Models Show Political Bias, Watchdog Says
In a study released Tuesday, the regulator tested four of the world's most popular large language models — ChatGPT (by OpenAI), Gemini (by Google), Mistral, and Grok (by xAI) — to assess their responses when asked to match fictional voters with political parties based on policy preferences.
The research involved 1,500 simulated voter profiles, each reflecting a range of positions on issues such as immigration, climate policy, taxation, and social welfare. The findings revealed a clear pattern of political bias:
- In over 50% of test cases, the AI tools recommended parties on the political flanks, most notably the hard-right Party for Freedom (PVV) led by Geert Wilders and the left-wing Green Left–Labour alliance (GroenLinks–PvdA).
- Mainstream centrist parties, including the People’s Party for Freedom and Democracy (VVD) and Democrats 66 (D66), were recommended far less frequently.
- Certain parties, including the long-established Christian Democratic Appeal (CDA) and the newer party Denk, were "almost never suggested", raising concerns about representational fairness in AI-generated guidance.
The report concluded that these discrepancies could skew voter perceptions, especially among those unfamiliar with Dutch politics or reliant on AI for quick summaries.
“A Threat to Democratic Integrity”
Monique Verdier, the deputy chair of the Dutch Data Protection Authority, said the findings highlight serious risks to electoral transparency.
“This directly impacts a cornerstone of democracy: the integrity of free and fair elections,” Verdier stated. “We urge voters not to use AI chatbots for voting advice, because their operation is neither transparent nor verifiable.”
She emphasized that AI models are trained on vast, unverified online data, which may amplify biases, misinformation, or extreme political content. As a result, voters could be inadvertently nudged toward parties that do not reflect their values or priorities.
Verdier also called on AI companies to take responsibility, saying providers must prevent their systems from being used as voting recommendation tools, especially in sensitive election periods.
Growing AI Influence on Political Decisions
The warning comes amid a global surge in AI usage across nearly every sector — from education and entertainment to policymaking. Political scientists have raised concerns that AI-generated information could shape electoral outcomes, especially as voters turn to chatbots for quick, personalised answers.
In Europe, countries such as Germany, France, and the United Kingdom have already begun discussing regulations to curb AI influence in politics, including restrictions on election-related chatbot interactions and stricter transparency rules for political content generated by AI systems.
The Dutch watchdog’s findings make the Netherlands one of the first EU countries to conduct formal tests assessing AI bias in the electoral context.
Geert Wilders and the Party for Freedom (PVV)
The report’s mention of the Party for Freedom (PVV) is particularly significant given its dominant position in current Dutch politics.
Led by Geert Wilders, a hard-right populist known for his anti-immigration stance, the PVV triggered the collapse of the previous government earlier this year after coalition partners rejected Wilders’s 10-point immigration reform plan.
The PVV — which made history in 2023 by winning the most parliamentary seats — remains ahead in national polls, though analysts predict it will once again fall short of an outright majority.
Despite its popularity, mainstream Dutch parties have ruled out forming a coalition with Wilders, maintaining a political cordon around the PVV.
Election Integrity and AI Accountability
With the October 29 elections fast approaching, the Dutch Data Protection Authority’s warning underscores growing international concern about AI’s role in shaping democratic processes.
Experts caution that, without oversight, AI-driven misinformation or algorithmic bias could erode public trust in elections, amplify extremism, and undermine pluralistic debate.
AI companies, meanwhile, have defended their models, claiming that they strive to ensure neutrality and transparency. However, the Dutch watchdog stressed that “neutrality cannot be guaranteed” given the opaque nature of training data and model architecture.
“The operation of these systems is a black box,” Verdier explained. “We cannot verify how the AI arrives at its conclusions, and that lack of transparency poses a real threat to democratic integrity.”
A Call for Voter Awareness
The authority concluded its report with a simple but firm recommendation:
“Voters should seek information from trusted, verifiable sources — not from AI chatbots.”
It urged citizens to rely on official government resources, established media outlets, and reputable voting guides to make informed decisions.
The Netherlands, where coalition governments have long been the norm, faces an increasingly fragmented political landscape. As AI tools become more prevalent in everyday decision-making, digital literacy and critical thinking are now viewed as essential safeguards for democracy.