
Artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) have become part of everyday vocabulary. We have heard it countless times: generative AI is turbocharging content creation and, hence the topic of this article, doing so with worrying believability. As more people gear up for another presidential election, concern is growing about generative AI’s potential misuse and unchecked proliferation. Spreading misinformation and manipulating public opinion with false stories and fabricated statistics are just some of the ways experts fear the technology could drastically sway the campaigning and outcome of such a high-stakes political event.

AI-generated articles have exploded in recent years, and with them the likelihood that misinformation spreads. AI makes it easy for anybody – regardless of technical proficiency – to create websites, articles and visual content that push misinformed opinions and perpetuate dangerous narratives. Meanwhile, creatives who have poured years of experience and hard work into producing written or visual content from scratch rightfully feel disenfranchised and are calling for greater stability and protection.

It is increasingly hard to differentiate genuine, well-researched content from fake news, which can be – and historically has been – a central theme of recent presidential elections. Millions of citizens cannot tell the difference between a photograph taken on a high-quality DSLR and an image generated by an AI tool following a user’s instructions.

Multiple stakeholders, from government policymakers to technology organizations and federal institutions, face crucial questions on how to promote responsible AI while protecting democratic discourse.

The Risks of AI-Enabled Misinformation

AI can rapidly generate and disseminate content that appears authentic and credible but may be partially or fully fabricated. AI programs like ChatGPT are known to produce ‘hallucinations’: they satisfy the user’s prompt without any innate ability to detect whether the narrative or its subjects are false.

For example, if a user asks a generative AI tool to create a short biography of ‘Person X’ – a person who, unbeknownst to the chatbot, does not exist – the chatbot will try to create content that satisfies the request rather than alerting the user that the person appears to be made up, as the sketch below illustrates.
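As a simple illustration (not from the article), the Python sketch below shows how such a request might be sent to a chatbot API. The model name, the prompt and the person named are placeholders; the point is only that the model will typically return a fluent, confident biography rather than flagging that the person may not exist.

```python
# Minimal sketch using the OpenAI Python client; the model name is a
# placeholder and the person requested is entirely fictitious.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Write a short biography of Dr. Elara Voss, "
                   "the inventor of the first solar-powered submarine."
    }],
)

# The model will usually produce a plausible-sounding biography,
# even though no such person or invention exists.
print(response.choices[0].message.content)
```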

As such, this type of fake content can be published on the internet for the world to see, potentially earning the outlet greater ‘credibility’ among impressionable readers, particularly if the content’s narrative is skewed in a specific socio-political direction. Because humans have no built-in detector that flags content at first glance as misinformed or genuine, separating fact from fiction is increasingly difficult.

At the same time, generative AI has fueled the propagation of deepfakes and synthetic visual media. AI programs can produce fake videos or audio of real people espousing misinformation or prejudiced views that unfairly tarnish their reputations or provoke an unwarranted backlash from political opponents. Such content appears human-made, most viewers cannot tell it apart from the real thing, and work that would take humans hours takes AI algorithms mere minutes to conjure up.

The real worry is that AI tools can conjure up images, text or videos ad infinitum. Without being too sensationalist: once this fabricated content falls into the hands of bad actors, social media feeds can be flooded with artificially crafted posts that promote trending conspiracy theories or ‘clickbait’ created for the sole purpose of deepening political division.

An obvious risk in the 2024 election year is the challenge of safeguarding campaign integrity and poll results. The question is not simply whether deepfakes and misinformation will be purposefully deployed – we know that they will – but rather how effectively lawmakers, researchers, the media and technology leaders will respond to them.

Safeguarding Democracy With Responsible AI Practices

Misinformation in the political sphere is not exactly new; fake images, memes, videos and audio ‘recordings’ have been around for a long time. Recent events, however, have seen more concerted efforts to impersonate candidates ahead of elections to create confusion and erode voter confidence.

Fortunately, social media platforms tend to have built-in fact-checking features that can add contextual information and flag whether shared content is fake. However, these verification tools can take time to kick in, by which point misinformed content may already have been in the public eye for several hours and swayed numerous voters.

Developing AI authentication techniques that reliably and efficiently detect fake content and synthetic media – whether shared online or on broadcast media – would be a solid first step. News organizations and social media companies would then have access to more third-party tools that can decisively label manipulated content and remove it.
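For illustration only, here is a minimal Python sketch of how a newsroom or platform might wire a third-party detector into a labeling workflow. The `detector_score()` function, the label names and the thresholds are hypothetical placeholders, not a real detection API or recommended values.

```python
from dataclasses import dataclass


@dataclass
class ModerationResult:
    content_id: str
    synthetic_score: float  # 0.0 = likely authentic, 1.0 = likely AI-generated
    label: str              # "authentic", "needs-review" or "likely-synthetic"


def detector_score(media_bytes: bytes) -> float:
    """Hypothetical placeholder: in practice this would call a third-party
    detection service or an in-house model; no universal API exists today."""
    raise NotImplementedError("plug in a real detection model or service here")


def triage(content_id: str, media_bytes: bytes,
           review_threshold: float = 0.5,
           label_threshold: float = 0.9) -> ModerationResult:
    """Route content to human review or automatic labeling based on an
    assumed detector score; thresholds are illustrative only."""
    score = detector_score(media_bytes)
    if score >= label_threshold:
        label = "likely-synthetic"   # label prominently, limit distribution
    elif score >= review_threshold:
        label = "needs-review"       # queue for human fact-checkers
    else:
        label = "authentic"          # no action needed
    return ModerationResult(content_id, score, label)
```

The design point is simply that automated detection is probabilistic, so a workable pipeline pairs automatic labeling of high-confidence cases with human review of borderline ones.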

Unambiguous AI regulation will also help address transparency and verifiability gaps when AI-generated media enters political advertising campaigns. The effect of deepfakes and impersonation on misleading voters can be incredibly difficult to quantify, but more specific rules on deepfake detection and identity verification will at least provide clarity on how such media may be used and disclosed.

Educational programs and public awareness campaigns on media literacy and critical thinking will help more users identify AI-generated misinformation. As AI-generated content and media enter the public domain, voters will continue to demand democratic accountability, since its proliferation can deprive the public of the facts and statistics needed to make informed decisions come voting day. It also raises further questions about the “liar’s dividend” – the way the very existence of generative AI breeds mistrust, allowing genuine evidence to be dismissed as fake.

As AI-generated content grows in volume and speed, and as people find it harder to distinguish real from fake, voter trust erodes further.

Ethics review boards with diverse viewpoints and stakeholders from tech companies and AI research hubs should be established to drill down on the potential long-term harm this content can cause. Fact-checking and misinformation-detection tools will play critical roles in mitigating the impact of deepfakes and synthetic media in upcoming elections.

A Shared Civic Responsibility

To effectively mitigate the impact of misinformation, inclusive industry-wide dialogue aimed at finding solutions will prove critical. The global nature of AI’s threat creates legislative, commercial and individual arguments for tighter regulation, and it remains the remit of tech leaders to apply AI ethically and prevent its dangerous uses from spiraling further out of control.

Protecting democratic processes ultimately depends on a combination of technical safeguards, continued education, transparency and citizens’ commitment to fact-checking and information quality, while preventing dangerous rhetoric from spreading further afield.
