Online influence operations based in Russia, China, Iran and Israel are using artificial intelligence to manipulate public opinion, according to a new report from OpenAI.
Bad actors are using OpenAI’s tools, including ChatGPT, to generate social media comments in multiple languages, fabricate names and bios for fake accounts, create cartoons and other imagery, and even debug code.
The report is the first of its kind from OpenAI, which has quickly become one of the major players in AI. ChatGPT has garnered more than 100 million users since its public launch in November 2022.
But even though AI tools are helping those behind influence operations create more content, make fewer mistakes, and fake engagement with their posts, OpenAI says the operations it uncovered didn’t garner significant support from real people or reach large audiences. In some cases, the little genuine engagement their posts got was just responses from users accusing them of being fake.
“These operations may be using new technology, but they’re still wrestling with the age-old problem of how to deceive people,” said Ben Nimmo, principal researcher on OpenAI’s intelligence and investigations team.
The findings coincide with a quarterly threat report released Wednesday by Facebook owner Meta, which said that while some of the covert operations it has recently disrupted used AI to generate images, videos and text, the cutting-edge technology has not hampered its ability to thwart efforts to manipulate people.
The boom in generative artificial intelligence, which can quickly and easily produce realistic audio, video, images and text, is creating new avenues for fraud, scams and manipulation. The prospect of AI-enabled fakery disrupting elections is a particular concern, as billions of people around the world, including in the United States, India and the European Union, head to the polls this year.
In the past three months, OpenAI has banned accounts linked to five covert influence operations, which it describes as attempts to manipulate public opinion and influence political outcomes without revealing the true identities or intentions of the actors behind them.
These include two operations well known to social media companies and researchers: Russia's "Doppelganger" and a vast Chinese network dubbed "Spamouflage."
Doppelganger, which the U.S. Treasury Department has said is linked to the Kremlin, is known for impersonating legitimate news sites to undermine support for Ukraine. Spamouflage operates across a wide range of social media platforms and internet forums, spreading pro-China messages and attacking critics of Beijing. Last year, Meta said Spamouflage was the largest covert influence operation it had ever disrupted and linked it to Chinese law enforcement.
Both Doppelganger and Spamouflage used OpenAI tools to generate comments in multiple languages and post them on social media sites, and the Russian network used AI to translate articles from Russian into English and French, and to convert website articles into Facebook posts.
The Spamouflage accounts used AI to debug the code of websites targeting Chinese dissidents, analyze social media posts, and research news and current events. Posts from the fake Spamouflage accounts only received replies from other fake accounts in the same network.
Another previously unreported Russian network banned by OpenAI focused on spamming the messaging app Telegram. It used OpenAI's tools to debug code for a program that automatically posted to Telegram, and to generate the comments its accounts posted on the app. Like Doppelganger, the operation was broadly aimed at undermining support for Ukraine, including through posts weighing in on politics in the United States and Moldova.
Another operation that both OpenAI and Meta said they shut down in recent months was traced to a Tel Aviv-based political marketing firm called Stoic. According to Meta, the fake accounts posed as Jewish students, African-Americans, and concerned citizens. They posted about the war in Gaza, praised Israel's military, and criticized campus antisemitism and the U.N. relief agency for Palestinian refugees in the Gaza Strip. The posts were aimed at audiences in the United States, Canada, and Israel. Meta banned Stoic from its platforms and sent the company a cease-and-desist letter.
OpenAI said the Israeli operation used AI to generate and edit posts and comments on Instagram, Facebook and X, and to create fictitious personas and bios for fake accounts. The company also found activity by the network targeting elections in India.
None of the operations OpenAI disrupted relied exclusively on AI-generated content. "This wasn't a case of giving up on human generation and moving to AI, but a mix of the two," Nimmo said.
He said that while AI offers threat actors some advantages, such as increasing the volume of content they can produce and improving translations across languages, it doesn't help them overcome the main challenge of distribution.
“You can create content, but if you don’t have a distribution system to get it in front of people in a reliable way, it’s going to be hard to spread,” Nimmo said. “And what we’re seeing here is exactly that dynamic playing out.”
But companies like OpenAI must remain vigilant, he added: “Now is not the time to be complacent. History has shown that influence operations that have produced no results for years can suddenly erupt when no one is watching.”