
Deepfakes, Bots, and Virtual Candidates: AI Hits the Campaign Trail

Everyone with a functioning keyboard has an opinion on artificial intelligence these days. Some say it’ll take our jobs, others that it’ll seduce us, and a few claim it’ll soon choose our governments. While some folks are having coffee with ChatGPT like it’s their new BFF, others are asking: are we heading toward a political future where campaigns are run by algorithms?

Photo: AI

The answer is: yes. And no. And also... definitely maybe.


AI is already marching through elections worldwide, waving a digital flag. From deepfake candidates to robo-volunteers and AI-generated attack ads, political campaigns are embracing the tech revolution with the giddy recklessness of a kid on too much soda. Let’s explore just how weird (and real) it’s gotten.


Candidates Who Don’t Sleep, Stumble or Sweat

In India, a politician has used deepfake videos to appear fluent in dialects he doesn't actually speak. His AI version reached millions, lip-syncing scripts in multiple languages with eerie precision. Meanwhile, in South Korea, a presidential candidate debuted a digital clone called "AI Yoon," a deepfake version of himself that joked, flirted, and dished out meme-worthy one-liners while the real Yoon kept things stiff and statesmanlike.


And then there's Denmark's Synthetic Party, which tried to put an AI chatbot on the ballot, running on a platform distilled from decades of fringe policy proposals. Not a metaphor. A literal, self-declared robot politician.


These aren’t jokes. Or rather, they were, until they weren’t.


When Deepfakes Go Dark

Of course, not all AI in politics is used for tongue-in-cheek stunts. Some of it is flat-out dirty. In early 2024, New Hampshire voters received a robocall featuring what sounded like President Biden telling them not to vote. It was a deepfake, and yes, it triggered a criminal investigation and new federal rules against AI-generated robocalls.


Across the aisle, Ron DeSantis’s campaign shared an ad featuring fake images of Donald Trump hugging Dr. Fauci. Spoiler: those photos were AI-generated. Naturally, this kicked off a digital finger-pointing match, complete with accusations of hypocrisy and plenty of Photoshop shade.


The Republican National Committee went full dystopia, producing an entirely AI-generated campaign ad imagining a bleak America under a re-elected Biden. It was labelled as synthetic, but if you blinked, you missed the disclaimer.


Even international actors are playing the AI game. Troll farms have used AI bots to impersonate voters and spread disinformation. One particularly ham-fisted deepfake showed Ukrainian President Zelensky surrendering—quickly debunked, but still... yikes.


Bots with Ballot Ambitions

Let’s not forget the humble bot—the unsung hero of spammy political discourse. In 2020, thousands of automated accounts flooded Twitter with support, slander, and spicy conspiracy content. These bots don’t nap, they don’t argue back, and they never ask for campaign merch.


Today’s bots have upgraded. With generative AI, they can craft believable messages in flawless English (or whatever language you need), tailored to each demographic. Imagine a bot army, each one sounding like a slightly overenthusiastic campaign intern. Only smarter. And much, much louder.


Speechwriters, Phone Bankers, and AI Interns

It’s not just shady uses—campaigns are using AI openly to do their homework. Drafting speeches? Let ChatGPT handle that. Need fifty slogan variations? Done. In Pennsylvania, one candidate deployed an AI volunteer named "Ashley" to make calls and chat with voters. Friendly voice, answers on demand, and zero need for coffee breaks.


In India, meme wars were fought using AI-generated graphics. In the U.S., activists generated fake pro-Trump images featuring AI-created Black supporters. Not to be outdone, members of Congress have even delivered AI-generated speeches on the floor.


We now have politicians campaigning with help from chatbots, artists outsourcing visuals to Midjourney, and voters being targeted with algorithmically optimised talking points. Democracy’s new campaign manager might just be a neural network in a blazer.


The Ethics Department Tries Logging In

With AI racing ahead, the question isn’t just "Can we?"—it’s "Should we?" And more urgently, "Who’s responsible when this goes off the rails?"


Some U.S. states now require disclaimers on AI-generated campaign content. Others have banned misleading deepfakes altogether. The EU’s major party groups have pledged not to use deepfakes or synthetic manipulation in the 2024 elections. And companies like OpenAI have outright banned political campaigns from using their tools.


But enforcement is a mess. Loopholes remain, and for every new rule, there’s a creative workaround. One campaign’s innovation is another’s manipulation. And voters? They’re left wondering whether that charming video is a heartfelt message—or just a high-res hallucination.


Authenticity as a Campaign Strategy

Ironically, as synthetic content explodes, some politicians are now marketing themselves as "100% human." Imagine campaign signs boasting: "No bots. No deepfakes. Just awkward small talk and real sweat."


In the age of AI-generated everything, authenticity might be the last novelty. Candidates who flub their lines, sweat on camera, or accidentally insult their own party might be doing it on purpose—just to prove they’re real.


Democracy in the Age of Deepfakes

AI isn’t coming for politics—it’s already here, whispering in every campaign office, generating speeches, and sliding into your social feeds. Used wisely, it can inform, engage, and streamline. Used recklessly, it can distort, deceive, and destroy trust.


So next time you scroll past a too-perfect candidate quote or a suspiciously generic status update, ask yourself: is this a human... or their digital stunt double?


Either way, be careful what (and who) you like.


This article is based on real events, examples, and policy developments from 2020 to 2024 in the United States, Europe, and beyond.