It’s been a headline year for democracy, with over two billion voters casting their ballots in dozens of countries worldwide. But democracy has also never been under greater threat—and in many cases, AI-powered trickery is to blame. The US elections on November 5 will be the most significant in a year of polls that have redrawn the geopolitical map. That’s why governments and enterprises alike are working at full speed to stem the tide of AI-powered disinformation designed to mislead voters.

But for all the concern over state-backed influence operations and deepfake content, we must not forget that financially motivated scammers will also be eyeing election season as a major opportunity. As generative AI (GenAI) tools continue to lower the bar for creating and distributing online scams, individuals need to understand what the key threats might look like and how to respond.

Voters Beware

US voters are used to stories of online influence campaigns designed to sow discord and undermine democracy—although whether they’re able to spot such attempts is another matter. These efforts tend to peak just before major elections, when the online populace is more receptive to political messages.

GenAI-powered deepfake audio and video, which is increasingly hard to distinguish from the real thing, lends an extra element of credibility to fictitious stories—especially when posted on lookalike fake news sites and amplified on social media. Already this year we’ve seen deepfake images of Donald Trump being arrested, and deepfake audio of UK prime minister Keir Starmer supposedly ranting at staffers. In Slovakia, a pro-Russian candidate was even elected to lead the country after their opponent’s reputation was tarnished by deepfake audio released two days before the election.

So far, so normal. Recent research shows such efforts are gaining ground and will increasingly demand that voters be more skeptical of the political narratives they see online. However, that same vigilance must extend to scam campaigns that exploit election news for monetary gain.

Rather than deploying deepfakes in highly targeted scams such as business email compromise (BEC) or “virtual kidnapping,” threat actors can design broad scams that leverage deepfakes of public figures. Followers and fans of these influential figures are more likely to fall for what the fraudsters are saying if they see it in a fake video featuring Donald Trump or Elon Musk, for example. The fact that such figures often do promote their own business schemes online makes these fakes more believable still.

Lowering the Bar

In this regard, GenAI has been a game changer in lowering the barriers to entry for budding cybercriminals. Threat actors can choose from tens of thousands of open-source large language models (LLMs) on Hugging Face to help design and run their campaigns. Or they can purchase subscriptions to criminal LLM services that jailbreak legitimate tools like ChatGPT, enabling them to be used for nefarious purposes.

There are several uses to which these models could be put. The first is identifying and baiting victims. LLMs are great at sifting through vast quantities of content, such as social media accounts, to profile the political beliefs of the account holders. The results can be collated into target lists for particular election-themed scams. LLMs can also be used to create politically themed interest groups on social media, which then pump out polarizing posts to attract potential victims.

Second, LLMs are superb at creating content, and lots of it, with very little effort from the scammer. They can populate fake websites purporting to be run by political figures, perhaps even including some legitimate content lifted from official campaign sites. And they can help with search engine optimization (SEO) to ensure those sites appear at the top of search results.

Third, once those distinct user groups have been identified, LLMs could be deployed to build highly personalized phishing messages targeting them at scale. A tool like ChatGPT, for example, could easily combine voter interests, threat actor messaging and elements from a victim’s social media account into a convincing, targeted text or deepfake video script. It could also do the same in other languages, such as Spanish, with near-native fluency.

In all of the above examples, generative AI does the hard work so the threat actor doesn’t have to, enabling malicious campaigns to be built and deployed at tremendous speed and scale. That kind of firepower not only draws in criminal groups with little or no technical know-how, it also lets them experiment repeatedly to see what works, honing their messages and tactics all the while.

How Deepfakes Are Helping

Deepfakes are an important part of this conversation, as they can be produced to powerful effect with GenAI or standalone tools, without the need for technical knowledge or computing power. Deepfake 3D Pro is one such example. Available for a lifetime subscription of $160, it creates videos featuring a 3D avatar that can impersonate anyone and follow recorded audio, which could itself be generated using off-the-shelf voice-cloning services. Those services can impersonate a candidate from only a few seconds of their original voice.

Other options include Deepfake AI, available on underground sites for $400, which allows criminals to place a candidate’s face on top of an actor’s video to spew falsehoods and promote scams like fake fundraisers. There are even real-time face replacement tools like FaceSwap and AI VideoCallSpoofer, which enable criminals to conduct scams “live.”

Tech Can Help Us

The good news is that it’s easier to help voters protect themselves from scams than it is to change their minds about whom to cast their ballots for. And as stories of deepfakes become more familiar, voters may in time adopt a more “zero trust”-like approach, based on the principle of trusting nothing they see or read until it can be verified.

There are also initiatives that can help, such as watermarking for synthetic images, which would enable skeptical users to easily check who created an image, what changes have been made and by whom. Tech giants Meta, Google, OpenAI and Amazon are all backing the C2PA (Coalition for Content Provenance and Authenticity) standard for content provenance. Tech companies are also releasing free applications designed to detect deepfake video calls and flag false conversations in real time.
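To make the provenance idea concrete, here is a minimal Python sketch of how a client might check whether a JPEG carries embedded C2PA Content Credentials. It assumes, per the C2PA specification for JPEG, that the manifest is stored as a JUMBF box inside APP11 segments; it is a presence check only, not a cryptographic verification (that requires official tooling such as the C2PA project’s c2patool), and the file name used is purely illustrative.

```python
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Scan a JPEG's APP11 segments for an embedded C2PA (JUMBF) manifest.

    Presence check only: it does NOT verify who signed the manifest or
    whether it is intact. Use official C2PA tooling for real verification.
    """
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":          # missing SOI marker: not a JPEG
        return False

    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:            # lost sync with segment structure
            break
        marker = data[pos + 1]
        if marker == 0xDA:               # SOS: compressed image data begins
            break
        # Segment length is big-endian and includes the 2 length bytes.
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        segment = data[pos + 4:pos + 2 + length]
        # C2PA manifests ship in APP11 (0xEB) segments as JUMBF boxes
        # labelled "c2pa".
        if marker == 0xEB and b"c2pa" in segment:
            return True
        pos += 2 + length

    return False

if __name__ == "__main__":
    # "candidate_photo.jpg" is a hypothetical example file name.
    image = sys.argv[1] if len(sys.argv) > 1 else "candidate_photo.jpg"
    if has_c2pa_manifest(image):
        print(f"{image}: Content Credentials present (verify before trusting)")
    else:
        print(f"{image}: no C2PA manifest found")
```

A check like this only tells a user that provenance data exists; the trust decision still rests on validating the manifest’s signature chain with a full C2PA verifier.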

Politicians could also do their part by not using social media to promote investments or solicit donations. Regardless, there is still plenty more work to do to stop political deepfake scams. Ironically, in an era of growing political cynicism, most voters’ default setting is to trust the messages their candidates spread. It’s time to look more deeply into where some of those messages really come from.
