Covert operations originating from Russia, China, Iran, and Israel used OpenAI’s generative AI tools to sway public opinion and influence political outcomes in the United States and elsewhere, according to the company.

In a 39-page report this week, OpenAI executives wrote that the company disrupted five such operations over the past three months: two originating from Russia and one each from China, Iran, and Israel.

None of the influence campaigns appears to have made much of an impact, according to OpenAI, but the operations highlight ongoing concerns about the effect AI could have on elections around the world, from the United States to India.

“So far, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of their use of our services,” OpenAI wrote in the report, adding that, based on its assessment using the Breakout Scale – which rates influence operations (IO) on a scale of one to six, with six being the highest – “none of the five operations included in our case studies scored higher than a 2, indicating activity on multiple platforms, but no breakout to authentic audiences.”

Different Tactics, Similar Goals

“While these campaigns differed widely in their origins, tactics, use of AI, and apparent aims, we identified a number of common trends that illustrate the current state of the IO threat, and the ways the defender community can use AI and more traditional tools to disrupt them,” the company wrote. “Overall, these trends reveal a threat landscape marked by evolution, not revolution. Threat actors are using our platform to improve their content and work more efficiently. But so far, they are still struggling to reach and engage authentic audiences.”

An operation out of Russia that OpenAI calls Bad Grammar primarily used the Telegram encrypted messaging service to target the United States as well as Ukraine, Moldova, and the Baltic states of Estonia, Latvia, and Lithuania. Those behind Bad Grammar used OpenAI models to debug code for a Telegram bot and to create the short political comments in Russian and English that were then posted to Telegram.

Another Russian campaign known as Doppelganger used the company’s models to create comments in English, French, German, Italian, and Polish that were then posted on X (formerly Twitter) and 9GAG, an online platform and social media site based in Hong Kong. The operators also translated and edited articles in English and French that were posted on websites linked to Doppelganger, generated headlines, and converted news articles into posts on Facebook.

According to the EU Disinfo Lab, the Doppelganger operation is allegedly run by two Russian companies, Struktura and Social Design Agency, that run other such campaigns, including ones called WarOnFakes and Ukraine Inc.

China, Iran, and Israel in the Mix

Spamouflage, which has been around for at least five years, is a Chinese network that used AI models to research social media activity and to create texts in English, Chinese, Japanese, Korean, and other languages that were posted on social media platforms such as X, Medium, and Blogspot. The operators also used AI tools to debug code that manages databases and websites, including a previously unreported domain, revealscum[.]com.

The Iranian operation, known as the International Union of Virtual Media (IUVM), generated and translated long-form articles, headlines, and website tags using OpenAI services that were later published on a website linked to the bad actor, iuvmpress[.]com.

OpenAI called the Israeli campaign Zero Zeno and said it was run by STOIC, a business intelligence company based in the country. The operation’s name references Zeno of Citium, the ancient Greek philosopher who founded Stoicism.

“The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation,” OpenAI wrote.

In addition, the “operation also used our models to create fictional personas and bios for social media based on certain variables such as age, gender and location, and to conduct research into people in Israel who commented publicly on the Histadrut trade union in Israel,” the vendor wrote. “Our models refused to supply personal information in response to these prompts.”

Meta Hits Back at Doppelganger, STOIC

Meta, the parent company of Facebook and Instagram, also reported on influence efforts by STOIC and Doppelganger. In its first-quarter Adversarial Threat Report, the company wrote that it found influence operations on both social media platforms – including 510 Facebook accounts and 32 Instagram accounts – that were linked to STOIC.

“It is now banned from our platform,” Meta wrote in the report. “We issued a cease and desist letter to STOIC, demanding that they immediately stop activity that violates Meta’s policies.”

The Doppelganger group used Meta apps primarily to weaken international support for Ukraine in its war with Russia.

“This campaign continues to be a ‘smash-and-grab’ effort expending a large amount of resources in the face of a very high detection rate and daily loss of assets,” the company wrote. “Such persistence is expected for an influence campaign run ‘at the direction of the Russian Presidential Administration’ in wartime.”

Meta said it has been monitoring, detecting, and blocking Doppelganger’s activities since 2022, noting that those behind the operation “largely ceased to engage in … tactics on our apps, while still actively deploying them elsewhere online.”

Such activities included spoofing websites to impersonate news organizations and government agencies, commenting on other people’s posts, creating fictitious brands on the platform – such as Reliable Recent News – and seeding links to drive traffic off the platform.
