voting, elections and election news

Bring out your dead. Deepfake audio and video of deceased Indian politicians may be among the most bizarre uses of AI in elections to date, but news reports suggest the technology has been used in mostly constructive ways in India. And contrary to many fears about AI's misuse in what has been dubbed "the first artificial intelligence elections" of 2024, the public seems not only aware of AI's role but also likely to view candidates who deploy the technology surreptitiously in a negative light.

In India's just-concluded elections, political parties spent an estimated $50 million on authorized AI content. This spending included a video deepfake of the late politician Muthuvel Karunanidhi (wearing his trademark yellow scarf and shades), who died in 2018, and fake audio of "Mother" Jayalalithaa, who died in 2016, to bolster the parties' messaging. Nevertheless, the use of AI was generally viewed in a positive light, as it enabled quick translation of political messaging in a country that has 22 official languages and 780 unofficial ones.

Meanwhile, in Europe, Microsoft president Brad Smith told Reuters there was little significant use of AI in this month's European Parliament elections. Microsoft points to tools like Content Credentials, which tracks content provenance and applies watermarks to address the online distribution of deepfakes and to monitor potential election interference. Critics note that there is no industry-wide watermarking standard and that watermarks are often easy to remove.

That doesn't mean no one is trying to influence elections. The EU debunked a Russian-language YouTube video pushing false information, for example. Perhaps more insidious are the results of an audit by NewsGuard alleging that 10 leading AI chatbots repeat, and thereby validate, misinformation gleaned from fake news sites created by Russian agents, with names like the Boston Times, roughly one-third of the time. The Finnish IT company Check First says Russia is currently running "Operation Overload," a misinformation campaign designed to overwhelm newsroom fact-checkers.

AI misinformation was the focus of the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, signed by 27 tech companies in Munich, Germany earlier this year. The EU itself was galvanized into action by a number of incidents in 2023, including false warnings of attacks on polling stations in Spain.

All this may be a preview of the U.S. presidential election in November. American AI regulations are less stringent in the absence of national AI legislation and reflect a more reactive, piecemeal approach. The Federal Communications Commission (FCC), for example, banned AI voice clones in robocalls after an incident in the New Hampshire primary involving the false use of President Biden's voice. During the course of its investigation, the agency found that few of the six AI tools it tested had adequate safeguards in place to protect against political misinformation.

At the national level, AI political misinformation is often outed quickly, as in the case of the fake AI images of Black people supporting Donald Trump and the Republican Party in his presidential bid. Of growing concern, however, is the use of AI in "down-ballot" races, where a targeted candidate may lack the resources to counter misinformation and where the decline of local press outlets makes it harder to separate fact from fiction in a timely manner.

Only about a third of U.S. states have laws on the books to prevent AI political misinformation, according to Public Citizen, a nonprofit consumer advocacy organization. This regulatory gap allows situations like one in Georgia, where the website of Glenn Cook, a Republican candidate for the state legislature, features images of smiling, chatting "community members" that are actually AI-generated images of people who don't exist, the Associated Press reports.

Political candidates who misuse AI may face unexpected blowback on election day. A Reuters survey found that 59% of respondents were concerned about fake news online, which suggests a negative outcome for purveyors of AI political misinformation. And many voters may be taking their own steps to limit AI's influence: 34% said they now track individual journalists as primary and trusted news sources.