The Federal Communications Commission (FCC) is considering a rule that would require political campaigns to disclose whether they used AI-generated content in television or radio ads.
The FCC isn’t looking to ban the use of AI in political advertisements. But given how easy it has become to create image and voice deepfakes since the explosive growth of generative AI in late 2022, the commission wants voters to know when what they’re seeing or hearing was created with AI technologies.
“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” FCC Chairwoman Jessica Rosenworcel said in a statement. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”
Elections and AI
The proposed rule is the latest step by a government agency to put policies in place to reduce the negative effect that AI could have during a high-profile and highly volatile election season, not only in the United States but in other parts of the world, such as India. The World Economic Forum in January ranked AI-generated disinformation and misinformation as the top global threat over the next two years, writing that the threat “has risen rapidly in rankings to first place for the two-year time frame, and the risk is likely to become more acute as elections in several economies take place this year.”
In its own 2024 risk report, the U.S. Department of Homeland Security wrote that “electoral processes [in the United States] remain an attractive target for many adversaries, and we expect many of them will seek to influence or interfere with the 2024 election.” Using AI to spread disinformation falls into this category.
IT vendors, including AI companies such as Google and OpenAI, have also said they are taking steps to limit the influence their products could have on elections this year.
Peter Loge, director of the School of Media and Public Affairs at George Washington University, said in a video last month that political campaigns are using AI for a growing range of tasks, from predicting where voters are and what they care about, to writing fundraising emails and generating first drafts of scripts and speeches, to “at least in one case, making up news stories that aren’t true and putting them on a campaign website.”
“There [are] a lot of ethical concerns with AI in campaigns,” Loge said. “The basic rule of thumb is, there aren’t AI ethics that are different from everybody else’s ethics. You have a set of ethics. In a campaign, you should aim to persuade and inform, not deceive and divide.”
The Case in New Hampshire
The worry became real in January when as many as 25,000 voters in New Hampshire received robocalls that used what sounded like President Biden’s voice to encourage them not to vote in that state’s presidential primary and to save their vote for the November election. It turned out to be a scheme that used AI-based voice cloning to impersonate Biden and led to the FCC banning the use of AI-generated voices in robocalls.
Investigators traced the scheme back to Lingo Telecom, the carrier that transmitted the calls, and Steve Kramer, a Democratic political consultant in New Orleans who worked for Dean Phillips, Biden’s rival for the Democratic nomination who ended his campaign in March.
The N.H. Attorney General’s Office this week indicted Kramer on 13 felony counts of voter suppression and a misdemeanor count of impersonating a candidate. For its part, the FCC is proposing a $6 million fine against Kramer and another $2 million fine against Lingo Telecom.
In a statement, Rosenworcel called the case in New Hampshire “unnerving. Because when a caller sounds like a politician you know, a celebrity you like, or a family member who is familiar, any one of us could be tricked into believing something that is not true with calls using AI technology. It is exactly how the bad actors behind these junk calls with manipulated voices want you to react.”
With the proposed FCC AI disclosure rule, if the other commissioners agree with the idea, the agency will seek comment on whether to require both on-air and written disclosures for ads that contain AI-generated content and whether to apply the rules to both candidate and issue-oriented ads. The commission will also ask for comments on a specific definition of AI-generated content.
Not Everyone Is on Board
At least one commissioner isn’t going along with the idea. Brendan Carr, in a statement, called the proposal “as misguided as it is unlawful,” implying that it is being offered to help Biden’s re-election efforts. Carr, a Republican, also questioned the FCC’s authority on this issue.
“Unlike Congress, the FCC cannot adopt uniform rules,” he said. “Indeed, the FCC can only tilt the playing field.”
However, Rosenworcel said the FCC’s authority comes from the Bipartisan Campaign Reform Act of 2002, which allows the agency to oversee political advertising.