AI Hallucinations in Political Contexts: Emerging Challenges to Democratic Trust

Domagoj Bebić, Jelena Đuraš Gleđ, Nikša Sviličić

Abstract

This paper examines AI-generated hallucinations not only as unintended technical outputs of generative systems but as inherent outcomes of how these models construct meaning and fill informational gaps. In the political sphere, where credibility, accuracy and public trust carry particular weight, such outputs increase the risk of misinformation, heighten polarisation and contribute to growing uncertainty about what is real. Using a qualitative approach that combines a structured review of the extant literature with a multi-case analysis of synthetic political content, including short-form deepfake videos and artificial audio disseminated during election periods, the study traces how hallucinations enter and circulate within political communication. The findings show that algorithmic attention systems tend to elevate emotionally charged and personalised material, allowing synthetic content to appear alongside or even above authentic political messages. This blending of sources makes it more difficult for citizens to recognise what is genuine, particularly in fast-paced digital environments. The paper argues that these risks call for a layered response. Regulatory safeguards, clearer provenance and labelling mechanisms, and sustained investment in digital literacy can help limit the democratic harms associated with synthetic media. At the same time, such measures can create space for constructive and transparent uses of AI in political and electoral communication, ensuring that technological innovation does not undermine the foundations of democratic trust.

DOI: 10.5671/ca.49.1.4


Keywords

artificial intelligence; AI hallucinations; political communication; deepfakes; electoral process; democracy



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.