Microsoft executives are urging Congress to pass legislation that would keep cybercriminals from using AI-generated deepfakes to defraud U.S. citizens, a push that comes as instances of fake video and audio of high-profile people like Vice President Kamala Harris continue to pile up.

Brad Smith, the IT giant’s vice chair and president, wrote in a blog post that “deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation – especially to target kids and seniors.”

Tech companies, nonprofit groups, and others are trying to address the issue, but new laws are also needed to combat deepfake fraud, Smith wrote.

“In short, we need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children,” he wrote. “While we and others have rightfully been focused on deepfakes used in election interference, the broad role they play in these other types of crime and abuse needs equal attention.”

Smith wants to see statutes that will give law enforcement agencies the tools they need to arrest and prosecute people who use generative AI capabilities to create deepfakes that steal from or hurt Americans. In addition, lawmakers should require AI system providers to use state-of-the-art tools for labeling synthetic content.

“This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated,” Smith wrote. “We find ourselves at a moment in history when anyone with access to the Internet can use AI tools to create a highly realistic piece of synthetic media that can be used to deceive: a voice clone of a family member, a deepfake image of a political candidate, or even a doctored government document.”

A Tool and a Weapon

He added that “AI has made manipulating media significantly easier – quicker, more accessible, and requiring little skill. As swiftly as AI technology has become a tool, it has become a weapon.”

He pointed to three particular areas that legislation needs to focus on: protecting elections, protecting seniors and consumers from online fraud, and protecting children from online exploitation. The industry and policymakers need to find ways to promote authenticity in what people see and hear, detect and respond to abusive deepfakes, and give the public what it needs to learn about the dangers of deepfakes.

Microsoft took a deeper dive into the issue with a 52-page report that accompanies Smith’s blog post.

Creating Deepfake Policies

There has been work in the government to try to rein in the abuse of deepfakes. The Senate last week passed the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which would allow victims of sexually explicit deepfakes made without their consent to sue those who create and share them. The bill now goes to the House of Representatives.

A group of senators on Wednesday introduced another bill – the long-awaited NO FAKES Act – aimed at protecting artists and others from nonconsensual AI deepfakes that use their images, likenesses, or voices. Actress Scarlett Johansson and singer Taylor Swift both have been victims of such incidents.

In addition, the Federal Communications Commission (FCC) continues to propose policies for the use of AI-generated voices in robocalls.

The Public’s Need to Believe

The U.S. Department of Homeland Security (DHS) issued a 43-page report about the dangers of deepfakes, writing that “the threat of Deepfakes and synthetic media comes not from the technology used to create it, but from people’s natural inclination to believe what they see, and as a result deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation.”

Meanwhile, the Federal Trade Commission (FTC) this year has also worked to put policies in place aimed at prohibiting people from impersonating others, including through the use of generative AI technologies.

Unsurprisingly, deepfakes continue to be a growing problem on the political scene. Elon Musk on X (formerly Twitter) shared a deepfake video of Vice President Kamala Harris after she became the Democratic Party’s presumptive presidential nominee, a move that may have violated the social platform’s policies against such media.

Musk also was a victim of a deepfake scam: an AI likeness of his voice was used to narrate a three-part series distributed on Telegram, called “Olympics has Fallen II: The End of Thomas Bach,” aimed at discrediting the Paris Olympics leadership.
