The Federal Communications Commission (FCC), which already has some policies in place governing the use of AI voice-cloning technology in robocalls, now wants to create a broader set of rules for such calls, including a formal definition of what constitutes an AI-generated robocall.
At the FCC's meeting next month, Chairwoman Jessica Rosenworcel will ask the full five-member commission for approval to seek comment on a range of rules that would not only pin down that definition but also require callers to disclose their use of AI in robocalls and support the development of technologies that can alert consumers to illegal AI-generated robocalls or texts.
In addition, the rules would safeguard beneficial uses of AI that help people with disabilities use communications networks, ensuring that such technologies can develop without the threat of liability under the Telephone Consumer Protection Act (TCPA).
“Bad actors are already using AI technology in robocalls to mislead consumers and misinform the public,” Rosenworcel said in a statement this week. “That’s why we want to put in place rules that empower consumers to avoid this junk and make informed decisions.”
The FCC said that requiring companies to alert consumers when they use generative AI in robocalls “would provide consumers with an opportunity to identify and avoid those calls that contain an enhanced risk of fraud and other scams.”
Keeping Up with Innovation
Like other federal agencies under the Biden Administration, the commission is trying to set policies around the use of generative AI technologies that are evolving at a breakneck pace and stretching into most aspects of people’s lives and businesses.
The new rules for AI and robocalls would follow steps the FCC took earlier this year to get a better handle on the issue, which came into the spotlight in January when robocalls using AI to mimic the voice of President Biden were sent to Democratic voters in the days leading up to the New Hampshire presidential primary, urging them not to vote.
The calls were traced back to a political consultant who created the messages and a Texas-based company, Lingo Telecom, that transmitted them. The FCC in May proposed a $6 million fine for the consultant, Steven Kramer, and a $2 million fine for Lingo Telecom.
Early Steps
In early February, the commission ruled that the use of AI voice-cloning technologies in robocalls is illegal, declaring that such voices are considered “artificial” under the TCPA. At the time, Rosenworcel said the FCC was “putting the fraudsters behind these robocalls on notice.”
However, some organizations noted that not all AI-generated robocalls were banned. The Elias Law Group, known for fighting attempts to infringe on citizens’ voting rights in court, wrote in a blog post that the FCC had simply made AI-created robocalls subject to the same regulations as other robocalls, so some would still be allowed under the TCPA rules.
Another law firm, Cooley LLP, agreed that the FCC didn’t change or expand the TCPA rules. It viewed AI voice cloning as simply another way of simulating the human voice, one that could still be used under the exemptions laid out in the law, so it was uncertain whether the ruling would stop anyone from spoofing voices during political campaigns.
That said, “parties should, however, take the FCC’s actions as an indication of its concern about the use of AI to create or send robocalls,” the law firm wrote. “Accordingly, any use of artificial voice technology to create telephone messages, or any calls made using a prerecorded human voice, should be reviewed by counsel.”
Getting the Carriers Involved
The FCC in June also sent letters to AT&T, Verizon and other carriers asking them to outline steps they’re taking to prevent fraudulent AI-based robocalls used for political purposes. The commission asked about such points as authenticating calls, customer practices and resources dedicated to identifying generative AI-created voices.
The commission is also considering whether to require companies to disclose when AI technologies are used in political ads on radio and television.
“The use of AI is expected to play a substantial role in the creation of political ads in 2024 and beyond, but the use of AI-generated content in political ads also creates a potential for providing deceptive information to voters, in particular, the potential use of ‘deep fakes’ – altered images, videos or audio recordings that depict people doing or saying things they did not actually do or say, or events that did not actually occur,” the FCC wrote in May.