Assoc. Prof. Dr Luka Martin Tomažič
Vice-Rector for Research, Alma Mater Europaea University, Slovenia
Vice-President of the Slovenian Academy of Legal Science
Lead Researcher at Global Peace Offensive Center of the World Academy of Art and Science
Given the prominence of digital platforms in contemporary democratic societies, hundreds of millions rely on them for information during pre-election processes as well. In this regard, Artificial Intelligence can significantly shape public discourse, and its ability to generate persuasive content heightens risks to democratic integrity.
In democratic societies, AI is a double-edged sword. On one hand lies the potential for election manipulation through disinformation and cognitive warfare; on the other, the danger that legitimate expression will be suppressed, fostering a chilling effect on freedom of speech.
The effective use of generative AI in election interference moved from hype to reality in 2025. While it is not a standalone disruptor, it can serve as a powerful accelerator of established interference techniques and approaches. AI may thus enhance election interference and erode voter trust, particularly through deepfakes and personalized persuasion, and AI-driven chatbots can be highly effective at shifting the opinions of a population.
Notable cases have already occurred, though with debatable success. These include deepfakes impersonating Prime Minister Mark Carney in Canada, manipulated videos contributing to the annulment of presidential results in Romania, and the AfD's use of nostalgic AI-generated content to boost its popularity in Germany. Actors such as Russia and China have leveraged AI for smear campaigns, fake endorsements, and the amplification of content via bot networks in Moldova, the European Union, and beyond.
Recent studies, such as those published in Nature and Science, demonstrate that AI-generated content can be up to four times more effective at shifting voter preferences than traditional approaches. While only one in ten or twenty study participants changed support after interacting with AI-driven chatbots, such success rates may flip results in closely contested elections. Positive uses, such as voter outreach and fact-checking, do exist, but risks tend to dominate, including but not limited to eroding trust in quality media and spreading disinformation.
The risks notwithstanding, free expression and public discourse remain cornerstones of fair elections and of the exchange of opinions, even contested ones, in a democratic society. The potential abuse of AI in connection with elections thus poses a significant technological, societal, and normative challenge. Government overreach, with censorship as a response to legitimate concerns about AI-driven election interference, might skew results in other, less direct but no less malign ways.
While balancing safety and speech clearly requires transparency, human oversight, and bias audits, it is by no means clear how to achieve desirable societal outcomes while limiting the externalities of malign uses of this new technology in democratic processes.
We will need to learn as we go and respond appropriately to the challenges, including disinformation and cognitive warfare, while preventing censorship that undermines free expression and all the benefits it entails. As democracy's defenders would put it, AI should empower truth and discourse, not distort or silence it. Only through vigilant, balanced governance can we ensure that technology serves, rather than subverts, free societies.
Luka Martin Tomažič holds a double habilitation in law and philosophy from Alma Mater Europaea University, Slovenia. He has authored more than a hundred publications in energy law, legal philosophy, and constitutional law. He is a lead researcher at the Global Peace Offensive Center of the World Academy of Art and Science and Vice-President of the Slovenian Academy of Legal Science. In 2023 and 2024, IusInfo named him among the ten most influential Slovenian lawyers.
