
Less than a month before the presidential election, generative AI is creating more headaches and head fakes for voters.

The technology is at the root of several misleading ads, images and videos intended to confuse voters and disrupt the electoral process, according to an analysis of 11,000 articles and media mentions from Sept. 6 to Oct. 7 by The Abundance Institute.

“It’s unclear how much of it is politically motivated, but if done well these fake videos and audio calls can sway opinions even if they are patently false,” Taylor Barkley, director of public policy for the Abundance Institute, said in an interview.

Trust, however, seems to be the first casualty: 53% of more than 5,600 Americans polled by Media Medic believe AI will interfere with major events like presidential elections.

Deepfakes’ rising visibility is the outgrowth of individuals, groups and nation-states pumping them out for fun, influence and profit.

“Americans are being fed a steady diet of disinformation about the electoral process,” Richard Bird, chief security officer at Traceable AI, said in an email. “Active measures to disrupt it — such as frivolous lawsuits, voter intimidation, and efforts to undermine the democratic process — are becoming increasingly common. We’re all sitting on a powder keg of distrust, and it is only a matter of time before it explodes.”

Politicians are actively sowing seeds of doubt, and extreme factions are more than willing to water them with aggressive actions aimed at disrupting or destroying the machinery of democracy itself.

Examples abound, underscoring the influence of nation-state antagonists of the U.S., Elon Musk and his X platform, and — unintentionally — major media organizations.

Among the most egregious incidents:

— A video that used a voice-cloning tool to mimic the voice of Democratic presidential candidate Kamala Harris, making her appear to mock President Joe Biden’s “senility,” was posted on X and gleefully amplified by Musk.

— A recent breach by Ukrainian hackers targeting Russian court systems and state media underscores how digital warfare has become a potent tool for political disruption.

— China, Russia and Iran used AI-generated content to sow discord and confusion among American voters, according to a September report from the Office of the Director of National Intelligence.

— After the presidential debate between Kamala Harris and Donald Trump in September, satirical AI-generated memes went viral on X and TikTok. The biggest buzz generator was Trump’s false assertion that Haitian residents of Springfield, Ohio, were eating cats and dogs.

— CNN mistakenly broadcast a doctored image depicting Donald Trump as obese alongside far-right activist Laura Loomer, underscoring how traditional media can inadvertently amplify manipulated content even without sophisticated AI involvement.

— An ad from Republican Sen. Mike Braun of Indiana that digitally altered an image of his opponent ran about 100 times without the disclosure required under Indiana law. Braun’s campaign eventually complied and added a disclosure to the video. (Only 11 states have laws regulating how AI can be used in political advertising and content.)

“The genAI models make it more convenient for malicious actors to create misleading content and influence political opinion, and yet they are incredibly difficult to monitor,” Gang Wang, an associate professor of computer science at the University of Illinois’ Grainger College of Engineering, said in an interview. “Determining the context and intention is very difficult without human involvement,” he added, at a time when companies like X and Meta Platforms Inc. are significantly scaling back moderators.

Automated detection systems are relatively successful as they scale, he said. Watermarking techniques are promising but still require development.

“GenAI may not have broken the information environment, but it certainly has complicated matters,” Kroll researchers said in a report this year. “Firms should be prepared both in their strategy and available toolset to react to political developments and meet direct risks head-on.”

Lawmakers, meanwhile, are taking steps to address the problem, albeit in fits and starts.

The Justice Department reportedly is drafting guidelines for law enforcement agencies on the use of genAI, facial recognition and other emerging technologies to enhance public safety.

California Democratic Gov. Gavin Newsom also recently signed three laws targeting deepfakes.

Ultimately, the best defense is at the granular technology level, say IT experts.

“Application layer defense coupled with securing the integrity of software components across the supply ecosystem for voting systems and applications can help prevent cyber-attacks that compromise the integrity of elections,” Jimmy Xu, field chief technology officer at Cycode, said in a statement. “This will ensure voter information remains protected, election results are accurate and pristine, and public trust in the electoral process is maintained.”
