
Two federal agencies are moving to protect consumers against scammers who use AI technology in robocalls and voice-cloning schemes, the latest sign of regulators’ growing interest in examining the pros and cons of advanced technologies like AI and machine learning.

The Federal Communications Commission (FCC) this month issued a Notice of Inquiry (NOI) to collect information about how AI tools are being used in unwanted or illegal calls and text messages, which are regulated by the Telephone Consumer Protection Act (TCPA).

“We seek to understand how these technologies might affect the existing regulatory landscape that protects consumers from unwanted and illegal robocalls and robotexts,” the FCC wrote in its 16-page notice. “In this context, we seek information that could inform policies that anticipate how AI could help protect consumers against unwanted communications and how it could do the opposite.”

At the same time, the Federal Trade Commission (FTC) is ramping up what it’s calling the Voice Cloning Challenge in hopes of promoting ideas to prevent, monitor and evaluate malicious uses of voice cloning technology with the end goal of protecting consumers from AI-based fraud and other scams.

“The challenge is crafted in a way that ensures companies are responsible for the first- and second-order effects of the products they release,” FTC CTO Stephanie T. Nguyen said in a statement.

Regulators Turn Their Focus to AI

Together, the two efforts reflect the federal government’s growing interest in ensuring that AI technologies are developed and used responsibly and that solutions are put in place to protect organizations and individuals from the potential harm that all forms of AI – including those that can replicate natural voices – can cause.

AI voice technologies offer myriad benefits, from giving a voice to those who can’t speak to eliminating language barriers. But they also can be used in a range of scams. Developing the tools and understanding needed to determine when AI voice technology is being used in robocalls or other schemes will be important as the global AI voice generator market could reach almost $5 billion by 2032, with the voice cloning segment reaching almost $1.8 billion by 2029, fueled in large part by advances in natural language processing and deep learning.

As with other cyberthreats, AI is making it easier for bad actors to pull off their crimes. The FCC’s inquiry is examining how AI is being used in robocalls, not only to emulate human voices but also to enable models to interact with victims. The agency also wants to know how AI can be used to detect and block unwanted robocalls and robotexts and help the FCC better enforce the TCPA.

“Our inquiry includes defining AI in this context, the current state of AI use in calling and texting, the impact of emerging AI technologies on consumer privacy rights under the TCPA, and, if appropriate, the Commission’s next steps to address these issues,” the FCC wrote.
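To make the detection side of that inquiry concrete, here is a minimal sketch of one form AI-based robotext filtering could take: a toy text classifier that scores incoming messages. The example messages, labels and model choice are illustrative assumptions only, not anything the FCC has specified, and a production filter would draw on far larger datasets and carrier-level signals.

```python
# A minimal, illustrative sketch of AI-based robotext filtering: a toy text
# classifier trained on a handful of made-up example messages. The messages,
# labels and model choice here are all hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = unwanted robotext, 0 = legitimate.
messages = [
    "You have won a prize! Click this link to claim your reward now",
    "Final notice: your account is suspended, verify your details immediately",
    "Hey, are we still on for lunch tomorrow?",
    "Your package was delivered at 3:15 PM",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score an incoming message; a real system would block, flag or deliver it
# based on this probability and carrier policy.
incoming = "Claim your reward now, click the link"
print(model.predict_proba([incoming])[0][1])  # probability the text is unwanted
```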

The Threat of Family-Emergency Schemes

The FTC is trying to tackle the cyberthreats arising from voice cloning, which the agency noted has improved rapidly with advances in generative AI text-to-speech models. A growing problem is scammers using cloned voices to impersonate family members, friends or other associates and lure victims into handing over money or information.

There are numerous stories of people receiving calls that sound like a family member or friend in an emergency, pleading for money immediately to get out of a jam. The FTC in March issued a notice about the threat.

“Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie,” Alvaro Puig, consumer education specialist with the FTC, wrote in the advisory. “We’re living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member’s voice — which he could get from content posted online — and a voice-cloning program. When the scammer calls you, he’ll sound just like your loved one.”

The FTC said it will accept challenge submissions from January 2-12. Proposals must address prevention or authentication, real-time detection or monitoring, or post-use evaluation that determines whether an audio clip contains cloned voices.

The winner of the challenge will receive $25,000.
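As an illustration of that last category, the sketch below shows the rough shape a post-use evaluation tool could take: extract acoustic features from audio and score how likely a clip is to contain cloned speech. Everything here is a hypothetical assumption for demonstration – the synthetic stand-in “clips,” the MFCC features and the classifier – and the FTC has not prescribed any particular approach; a credible detector would need real corpora of genuine and cloned speech and a far stronger model.

```python
# A structural sketch of "post-use evaluation": summarize audio clips as
# acoustic features and score whether a clip is likely cloned. The training
# "clips" below are synthetic tones standing in for real recordings (label 0)
# and cloned speech (label 1); everything here is illustrative only.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 16000  # sample rate in Hz

def mfcc_features(clip: np.ndarray) -> np.ndarray:
    """Summarize a clip as the mean of its MFCC frames."""
    mfcc = librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical placeholder audio: noisy sine tones for "genuine" clips,
# noisy square waves for "cloned" clips.
rng = np.random.default_rng(0)
t = np.linspace(0, 1.0, SR, endpoint=False)
genuine = [np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=SR) for f in (110, 150)]
cloned = [np.sign(np.sin(2 * np.pi * f * t)) + 0.1 * rng.normal(size=SR) for f in (110, 150)]

X = np.array([mfcc_features(c) for c in genuine + cloned])
y = np.array([0, 0, 1, 1])

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Evaluate an unseen clip; a real tool would load one with librosa.load(path).
test_clip = np.sign(np.sin(2 * np.pi * 130 * t)) + 0.1 * rng.normal(size=SR)
print(clf.predict_proba([mfcc_features(test_clip)])[0][1])  # P(cloned)
```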

Lawmakers Are Taking Notice

Earlier this month, the U.S. Senate Special Committee on Aging released a report on the top scams of 2022, noting that “though many of the same types of scams present themselves year after year, the methods through which scammers contact victims have diversified – Artificial Intelligence (AI) and social media now play a prominent role.”

The panel cited AI technologies such as chatbots, voice cloning and deepfake videos as new tools being used by bad actors, and added that AI is also improving the effectiveness of familiar threats, including phishing, romance scams and family-emergency schemes.

It also said that robocalls and unsolicited calls represented the third-largest category of fraud at 7%, behind scams related to financial services and health care. Person-in-need and grandparent scams ranked tenth, at 2%.

Also this month, Sens. Amy Klobuchar (D-MN) and Susan Collins (R-ME) called on both the FTC and FCC to continue raising awareness of voice-cloning scams and to develop tools for preventing them.

“These scams are putting too many parents and grandparents through this emotional toll, and as technology improves, voice clones will only become more convincing,” the lawmakers wrote. “More can be done to educate Americans about these frauds and to help prevent them from happening.”
