
When OpenAI, the maker of ChatGPT, laid out steps earlier this week to push back against misuse of its generative AI tools during the consequential 2024 voting season, it became only the latest organization to warn that AI-driven disinformation and other threats could disrupt elections.

Federal and state agencies, lawmakers, researchers, think tanks, cybersecurity vendors and others have been sounding alarms for more than a year about how threat actors such as nation-states and cybercriminals can use generative AI to create deepfakes and voice clones that impersonate candidates, spread disinformation about candidates, issues and election data, and spark violence.

The World Economic Forum, in its Global Risks Report 2024 released this month, put the role of AI in elections at the top of its risk list, ahead of such top-of-mind issues as climate change, war and the global economy. In an interview, Carolina Klint, chief commercial officer for Europe at consultancy Marsh McLennan, which co-produced the report, told CNBC that, “AI can build out models for influencing large populations of voters in a way that we haven’t seen before. How that is going to play out is going to be quite important for us to watch carefully.”

Meanwhile, the U.S. Department of Homeland Security (DHS), in its own 2024 risk report, wrote that the United States’ “electoral processes remain an attractive target for many adversaries, and we expect many of them will seek to influence or interfere with the 2024 election.”

That includes leveraging AI. The proliferation and accessibility of such tools, the agency added, “probably will help these actors bolster their malign information campaigns by enabling the creation of low-cost, synthetic text-, image-, and audio-based content with higher quality.” The agency called out adversaries like China, Russia and Iran as particular threats to elections in the United States.

And the threats don’t stop at the U.S. border. More than 50 countries will hold elections this year, with some 2 billion people expected to cast ballots.

Expanded Policies

In a blog post this week, OpenAI executives wrote that their approach to keeping their tools from interfering with elections “is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency.” The effort, they added, brings together OpenAI’s experts in areas such as safety systems, threat intelligence, legal, engineering and policy.

OpenAI early this year will implement digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) – which includes a number of AI companies – to encode details about images created by Dall-E 3, its latest text-to-image tool.

The company also is evaluating a new provenance classifier for detecting images generated by Dall-E, and will soon make it available to a first group of testers, including researchers, journalists and platform providers.
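OpenAI has not published the exact manifest format it will attach, but C2PA credentials are designed to be machine-readable. As a minimal sketch – assuming the C2PA project’s open-source c2patool command-line utility is installed, which the article does not specify – a journalist or platform could check a downloaded image for an embedded provenance manifest along these lines:

```python
# Sketch: checking an image for C2PA provenance metadata of the kind OpenAI
# says it will attach to Dall-E 3 output. Assumes the C2PA project's
# open-source `c2patool` CLI is installed and on PATH; the exact manifest
# fields OpenAI will encode are not public, so this only reports what it finds.
import subprocess
import sys

def inspect_provenance(image_path: str) -> None:
    result = subprocess.run(
        ["c2patool", image_path],  # prints the manifest report, if one exists
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print(f"C2PA manifest found in {image_path}:")
        print(result.stdout)
    else:
        # c2patool exits non-zero when no manifest (or a broken one) is present
        print(f"No readable C2PA manifest in {image_path}: {result.stderr.strip()}")

if __name__ == "__main__":
    inspect_provenance(sys.argv[1])
```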

OpenAI is increasingly integrating the ChatGPT chatbot with existing sources of information, such as real-time news that includes attribution and links.

“Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust,” they wrote.

In addition, in an effort with the National Association of Secretaries of State, ChatGPT will direct users to CanIVote.org for voting information in the United States when they ask procedural election-related questions, such as where to vote.
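OpenAI hasn’t detailed how that routing works internally, but the behavior is straightforward to approximate from the outside. Below is a minimal sketch of how a developer building on the OpenAI API might enforce a similar redirect with a system prompt; the model name and instruction wording are illustrative assumptions, not OpenAI’s actual configuration:

```python
# Sketch: approximating the CanIVote.org redirection with a system prompt.
# This is NOT OpenAI's internal mechanism; the model name and instruction
# wording below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "If the user asks a procedural U.S. election question (for example, "
    "where, when, or how to vote), do not answer from memory. Instead, "
    "direct them to https://www.canivote.org, the National Association "
    "of Secretaries of State's voter-information site."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any chat model works for the sketch
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Where do I vote in Ohio?"},
    ],
)
print(response.choices[0].message.content)
```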

More steps are coming, and OpenAI pointed to existing usage policies for ChatGPT and the vendor’s API, including prohibitions on chatbots that pretend to be real people – like candidates – or institutions, and on applications aimed at making voting more difficult, whether by misrepresenting voting processes and qualifications or by discouraging voters, such as claiming that voting is meaningless.

The company also doesn’t let people create applications for campaigning or lobbying. Such policies are important now that OpenAI lets developers create custom versions of ChatGPT and publish them in the recently launched GPT Store.
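Enforcing those policies across thousands of custom GPTs is itself a screening problem. Purely as a hypothetical illustration – OpenAI has said nothing about how its review actually works – a naive first-pass filter over a chatbot’s configured instructions might look like the following; real enforcement would rely on trained classifiers and human review rather than keyword patterns:

```python
# Purely illustrative: a naive pre-screen over a custom chatbot's configured
# instructions, flagging the kinds of violations OpenAI's policies describe
# (impersonation, voter discouragement, campaigning). Real enforcement would
# need classifiers and human review, not keyword matching.
import re

POLICY_FLAGS = {
    "impersonation": re.compile(r"\b(pretend to be|speak as|impersonate)\b", re.I),
    "voter_discouragement": re.compile(
        r"\b(voting is (pointless|meaningless)|don't vote)\b", re.I
    ),
    "campaigning": re.compile(r"\b(campaign for|persuade voters|lobby)\b", re.I),
}

def screen_instructions(instructions: str) -> list[str]:
    """Return the policy categories whose patterns match the instructions."""
    return [
        name for name, pattern in POLICY_FLAGS.items()
        if pattern.search(instructions)
    ]

flags = screen_instructions(
    "You are a chatbot that will impersonate the candidate and tell "
    "undecided users that voting is pointless."
)
print(flags)  # ['impersonation', 'voter_discouragement']
```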

A Call to Arms

The new OpenAI policies join a growing tide of concern about AI and elections, and of efforts to mitigate the myriad problems. Along with deepfakes and disinformation, generative AI also supercharges tailored spearphishing attacks, harder-to-detect malware, distributed denial-of-service (DDoS) attacks and online harassment.

The technology won’t introduce new risks – threat actors have long targeted U.S. elections – but it will “intensify existing risks,” CISA Director Jen Easterly wrote in a column this month in Foreign Affairs with Cait Conley, senior election security adviser at CISA, and Kansas Secretary of State Scott Schwab. “Generative AI in the hands of adversaries could threaten each part of the electoral process, including the registration of voters, the casting of votes, and the reporting of results,” they wrote.

Much of the pressure will fall on state and local officials, who – after the Republican attacks on the election process in 2020 – will need more help from such corners as federal agencies, generative AI companies, voting machine manufacturers, the media and voters, they wrote.

“Generative AI companies in particular can help by developing and making available tools for identifying AI-generated content, and by ensuring that their capabilities are designed, developed, and deployed with security as the top priority to prevent them from being misused by nefarious actors,” they wrote. “At stake is nothing less than the foundation of American democracy.”

Governments at the state, federal and international levels also are making some moves. The National Conference of State Legislatures last month outlined some state-level existing and proposed laws that could be applied to AI in elections and campaigns.

US Trailing Behind

In addition, Padraic O’Reilly, chief innovation officer at cyber-risk firm CyberSaint, told Techstrong.ai that the federal government “has started the process, but it is lagging.”

O’Reilly, whose company is working with campaigns, local governments and international governments to measure the risk of AI in the election process, noted that in October 2023 the Biden Administration released an executive order on safe, secure and trustworthy AI and is pushing an AI Bill of Rights. The same month, U.S. Senators Brian Schatz (D-HI) and John Kennedy (R-LA) introduced a bill for more transparency around AI-generated content, including labels to let people know when they’re looking at AI-generated material.

He also said the G7’s Hiroshima Process – which calls for principles and a code of conduct for developing AI systems – is promising, and the UK Safety Summit in November 2023 produced a joint commitment among 28 governments and AI companies to create a framework for testing AI tools before they are released.

“China has also arguably produced the most guidance, addressing a wide array of concerns from workers’ rights to data quality in training and output,” O’Reilly wrote. “The jury is still out, as China has a much more security state emphasis on the building of models, whereas the states lean more toward the consumer side at the moment.”

That said, it’s unlikely any of these efforts will have a significant impact on this year’s election cycle because bad actors will probably find workarounds to the initial provisions, he said, adding that the elections this year will be “interesting and perhaps a bit alarming, too, as we see new variants of disinformation leveraging deep fake technology.”

“There are already precedents for this, and the big fear is that microtargeting and AI-generated content will run wild on social media platforms,” O’Reilly said. “The combination of misused targeting data and AI-generated content is the perfect storm scenario. The combination of social media and deepfake tech could prove to be a very substantial challenge to the integrity of the upcoming elections, both here and abroad. And there are not enough guardrails in place on AI output to protect against its misuse by bad actors.”
