When actor Tom Hanks recently raised the alarm about an ad using an unauthorized video of himself to promote a dental plan, he highlighted the growing use of fake AI-generated images, text and voices to scam consumers.
“Beware!” Hanks wrote on his Instagram account. “There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it.”
Hanks isn’t alone as a target of AI scammers. Celebrities like American broadcast journalist Gayle King and YouTube personality MrBeast, known for his giveaways, have been the subjects of deepfake-style AI impersonation. The scope of the problem is huge. According to Federal Trade Commission chair Lina Khan, AI has “turbocharged” frauds and scams. The FTC reports Americans lost $9 billion to fraud in 2022, an increase of 150 percent over the past two years.
The youthful MrBeast went right to the heart of the issue. “Lots of people are getting these deepfake scams of me . . . are social media platforms ready to handle the rise of deepfakes? This is a serious problem!”
Despite assurances from Meta and others, the answer seems to be “no,” given how widely AI scams are proliferating. As cybersecurity expert Pete Nicoletti of Check Point Software Technologies noted in a BBC interview, AI gives scammers a new set of tools to use against a population that is generally outgunned from a technical standpoint. Non-English speakers can now easily target English speakers, for example, expanding the pool of potential criminals running scams.
Among the most distressing scams is one in which the voice of a loved one is cloned to demand money for their return or continued safety. In one widely reported incident, a woman received a call, apparently from her daughter, who sounded as if she were in harm’s way. In reality, her daughter was sound asleep in her bedroom.
For the moment, consumers have little recourse in a fight they are ill-equipped to handle. In most countries, using AI to copy or alter a person’s image isn’t technically illegal, although a Deepfakes Accountability Act in the works in the U.S. would punish criminal perpetrators. The issue is also central to the ongoing actors’ strike, in which the use of AI-generated likenesses is a key concern.
Making matters more difficult, Instagram, Facebook, Snapchat, TikTok and other social media platforms are counting on AI-generated image sharing as a key lure to their sites. Can of Soup, for example, is an invitation-only social network where people post photos of themselves that have been heavily modified by AI for entertainment. Meta is now using AI tools to generate photorealistic “stickers” in about five seconds. An app called BeFake, where users rework their photos with AI, signals clearly enough where social media is heading. Voice cloning software, meanwhile, has legitimate uses, such as helping those with speech impediments or letting authors produce audio versions of their books.
For now, consumers are largely on their own when it comes to matching wits with fraudsters, as detection software isn’t much help. OpenAI, for instance, admitted last month that its own AI-text detector doesn’t work, since it cannot reliably tell human writing from AI-generated text. Universities, for the most part, have given up on AI detection tools for programs like ChatGPT due to an unacceptable rate of false positives. Voice cloning, likewise, can be detected by comparing spectrograms of a genuine recording and a suspect one side by side, but that is beyond the means of most people.
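For the technically inclined, a minimal sketch of that side-by-side comparison in Python might look like the following. It assumes the open-source librosa and matplotlib libraries, and the file names are hypothetical placeholders for a known-genuine recording and the suspect audio:

```python
# A minimal sketch of side-by-side spectrogram comparison, assuming the
# librosa and matplotlib libraries are installed. The file names below are
# hypothetical placeholders, not real recordings.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

def plot_spectrogram(path, ax, title):
    """Load an audio file and draw its log-frequency spectrogram."""
    y, sr = librosa.load(path, sr=None)   # keep the file's native sample rate
    stft = librosa.stft(y)                # short-time Fourier transform
    db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)
    librosa.display.specshow(db, sr=sr, x_axis="time", y_axis="log", ax=ax)
    ax.set_title(title)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
plot_spectrogram("known_genuine_voice.wav", ax1, "Known genuine recording")
plot_spectrogram("suspect_recording.wav", ax2, "Suspect recording")
plt.tight_layout()
plt.show()
```

Even with the two plots side by side, spotting the subtle artifacts of a synthetic voice takes a trained eye, which is exactly why this kind of analysis remains out of reach for most consumers.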
The onus is on consumers to protect themselves. Nicoletti suggests families use a codeword to verify the authenticity of a call and keep social media accounts set to private. And if faced with a dubious video of a person, ask to see the back of their head, since most fraudsters won’t go to that much trouble in a fake rendering. For many, these security tricks may seem like a low-tech answer to a high-tech problem.
Numerous researchers, meanwhile, are racing to develop effective detectors of AI fakes. Among them is MIT, which is developing deepfake-detection algorithms to identify fraudulent images. In the course of that research, MIT has produced eight questions to help people identify fake images, centered on lighting, facial hair, skin quality, blink rate and other cues. It’s a safe bet that fraudsters are developing their own answers.