Worries about the threats that generative AI poses to elections – from spreading disinformation to deepfakes to voice clones – have been around since OpenAI introduced ChatGPT to the world almost 16 months ago and kicked off the land rush around the emerging technology.

Governments and vendors alike have been taking steps in hopes of stemming generative AI’s impact as the world moves deeper into a busy 2024 election calendar that will touch as many as 4 billion people in 40 countries, including the United States.

Google this week said that plans it announced in December for the highly charged U.S. elections will be implemented in India, where general elections are scheduled for April and May.

A key part of those plans is restricting the election-related questions that its Gemini generative AI chatbot will answer.

“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” the IT giant wrote in a blog post. “We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections.”

That echoes what Google laid out in December when talking about steps it is taking for the U.S. elections, including restricting election-related queries made to Bard – now known as Gemini – and Search Generative Experience (SGE), generative AI capabilities introduced to Google Search a year ago.

Generative AI Ups the Challenges

At the time, Susan Jasper, Google’s vice president of trust and safety solutions, wrote that for more than a decade the company has used machine learning classifiers and AI to detect and remove content that violates its policies, which bar the use of Google platforms to spread disinformation, incite violence, manipulate media or otherwise hinder the democratic process.

“And now, with the recent advances in our Large Language Models (LLMs), we’re experimenting with building faster and more adaptable enforcement systems,” Jasper wrote. “Early results indicate that this will enable us to remain nimble and take action even more quickly when new threats emerge.”

This week’s announcement comes weeks after Google temporarily pulled its Gemini image generator, which had drawn criticism for producing historically inaccurate depictions of people, among other problems.

A Range of Safeguards

In talking about the elections in India, Google also outlined other steps it’s taking regarding AI-generated content, including enforcing disclosure requirements instituted last year for election ads that contain synthetic content.

“As more advertisers leverage the power and opportunity of AI, we want to make sure we continue to provide people with greater transparency and the information they need to make informed decisions,” Google wrote. “Our ads policies already prohibit the use of manipulated media to mislead people, like deepfakes or doctored content.”

The company also is labeling content created with YouTube generative AI features, such as Dream Screen; requiring creators to disclose when they have produced realistic altered or synthetic content, and labeling that content as well; providing additional context through tools such as “About this image” in Search and the double-check feature in Gemini, which looks for web content that substantiates the chatbot’s responses; and applying digital watermarking.

In addition, Google recently joined the Coalition for Content Provenance and Authenticity (C2PA), which includes other vendors like Microsoft, Intel and Adobe and is developing open technical standards for certifying the source and history of media content. It also signed on to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, which was rolled out last month at the Munich Security Conference. Other vendors supporting the agreement include IBM, Amazon, Anthropic, Meta, OpenAI and Microsoft.

A Worrying Threat

The threat AI poses to elections has become a prominent talking point over the past year. The World Economic Forum in January ranked AI-driven misinformation and disinformation as the top short-term risk in its Global Risks Report 2024, ahead of such problems as war and climate change. The U.S. Department of Homeland Security in its 2024 risk report wrote that the U.S. “electoral processes remain an attractive target for many adversaries, and we expect many of them will seek to influence or interfere with the 2024 election.” That includes using AI to spread disinformation.

The use of generative AI to disrupt voting arose in January, when robocalls using an AI-created clone of President Biden’s voice were sent to Democratic voters in the days ahead of the New Hampshire primary, urging them not to vote. The incident – which was traced back to two Texas companies, Lingo Telecom and Life Corp. – prompted the Federal Communications Commission to declare robocalls that use AI-generated voices illegal.
