How much did AI really impact the election?

Ben Colman, co-founder and CEO of Reality Defender; Sabrina Palme, CEO of Palqee Technologies Ltd; Kalev Leetaru, founder of the GDELT Project; and moderator Ed Fraser from Channel 4 News speak during a panel at the Web Summit in Lisbon, Portugal. Camille Tuutti

A fake Joe Biden robocall during the primaries may have tricked thousands of New Hampshire voters, raising questions about how artificial intelligence is reshaping democracy. At last week’s Web Summit in Lisbon, Portugal, experts warned that AI-driven misinformation threatens public trust and called for action to counter the risks.

The Nov. 14 "Democracy Manifest" panel featured Ben Colman, co-founder and CEO of Reality Defender; Kalev Leetaru, founder of the GDELT Project; and Sabrina Palme, CEO of Palqee Technologies Ltd. Moderated by Ed Fraser of Channel 4 News, the discussion explored AI’s role in misinformation, regulatory challenges and media dynamics.

Fraser opened by asking whether AI had a measurable impact on the recent U.S. election. Earlier this year, robocalls using an AI-generated version of President Joe Biden's voice urged New Hampshire voters to skip the state's Democratic primary. On Sept. 26, the Federal Communications Commission proposed a $6 million fine against political consultant Steven Kramer for masterminding the ruse.

Colman explained how easily AI tools can produce realistic deepfakes and synthetic audio that influence public opinion. To illustrate, he played an audio clip mimicking Biden’s voice, noting how “something that was made in seconds in a taxi ride sounds just like Joe Biden.”

“So unlike a computer virus, any of you guys can create a perfect deepfake either for entertainment or to sway an election, or even worse,” he told the audience. 

Echoing Colman’s concerns, Palme noted that AI not only makes misinformation harder to detect but also accelerates its spread. She cited Ipsos data linking Americans’ media consumption to misinformation about crime and inflation. This, she said, shows how AI can amplify false narratives.

“Am I really getting the information from the president, the celebrity or a media individual person, or is it actually a deepfake?” she said.

Palme shared a recent example of how easily the public can be fooled. Shortly before Halloween, a fake AI-generated news website spread a story about a parade in Dublin. The story went viral on social media, drawing hundreds of people to the streets for a parade that didn’t exist.

“Many people laughed about this, but that just shows how easy it is to manipulate in the case of AI and especially spreading information on social media,” she said. 

Leetaru, whose GDELT Project analyzes global news coverage at scale, said large-scale deepfake use wasn’t evident in the election, but real images and videos taken out of context did mislead voters. He compared the tactic to meme culture, adding that such material is even harder to debunk.

“I think as these tools get more and more built into my phone, I pull it out, I type in a fake image of Biden falling, click OK and post — we're getting there now,” he said.

The challenges of regulation

Palme highlighted the difficulty of regulating AI, comparing the industry-led approach in the U.S. to Europe’s stricter frameworks, such as the Digital Services Act. While she supports regulation, she cautioned that Europe’s model might hamstring innovation.

“It's first and foremost going to impact … European companies wanting to innovate in the AI space, whereas in the U.S., businesses can start an AI company, they can grow, they can scale, get the credibility and then look into regulatory compliance,” she said.

Emerging U.S. state laws, such as Colorado’s Artificial Intelligence Act, could indicate a move toward state-level governance similar to data protection laws, Palme noted.

Colman argued that tools to detect AI-generated content already exist but aren’t widely adopted by tech platforms.

“We just need our government to force technology platforms to protect the average consumer who cannot tell the difference,” he said. 

President-elect Donald Trump has expressed skepticism toward strict AI regulations. What can be expected from his administration on this front? 

“I would say probably that Trump was quite open about his vision in terms of AI regulation, and I think probably we were not going to see a lot or not a strong push for implementing regulations or frameworks to implement trustworthy AI,” Palme said. “I think it's more going to be on the industry-is-going-to-regulate-itself basis.”

Media and democracy

The panel concluded with a discussion on how platforms can combat AI misinformation while safeguarding democracy. Historically, news outlets focused on presenting objective facts for public interpretation; now, the media is moving back to a “party paper” model, where reporting often reflects partisan viewpoints, Leetaru said.

He questioned whether debunking false information by outlets like The New York Times matters when much of the audience turns to influencers or podcasts for news.

“If you have very strong influencer personalities, do they even need to play a fake video of Biden saying something? They can just say, ‘Hey, Biden did this today.’ That's really all that matters,” he said.

The Web Summit drew more than 71,500 attendees from around the world, according to a LinkedIn post from organizers, and featured insights from industry leaders, policymakers and innovators on myriad tech topics.