OpenAI identified and took down five covert influence operations based in Russia, China, Iran and Israel that were using artificial intelligence tools to manipulate public opinion, the company announced Thursday.
In a new report, OpenAI detailed how these groups, some of which are linked to known propaganda campaigns, used its tools for a variety of deceptive activities, including generating social media comments, articles, and images in multiple languages, creating names and bios for fake accounts, debugging code, and translating and proofreading text. The networks pushed a range of narratives: defending the war in Gaza and Russia’s invasion of Ukraine, criticizing Chinese dissidents, and attempting to interfere in politics and sway public opinion in India, Europe and the United States. The operations targeted a wide range of online platforms, including X (formerly Twitter), Telegram, Facebook, Medium and Blogspot. None of them, however, “managed to attract a significant audience,” according to OpenAI analysts.
The company’s first such report comes amid global concern about the potential impact of AI tools on the more than 64 elections taking place around the world this year, including the U.S. presidential election this November. In one example cited in the report, a Russian group posted on Telegram: “We’re tired of these brain-damaged idiots playing games while Americans suffer. Washington needs to get its priorities straight or they’ll feel the full force of Texas!”
The examples cited by OpenAI analysts suggest foreign actors are using AI tools for the same types of online influence operations they have been conducting for a decade, with a focus on using fake accounts, comments and articles to shape public opinion and manipulate political outcomes. “These trends indicate that the threat landscape is characterized by evolution rather than revolution,” Ben Nimmo, lead researcher on OpenAI’s intelligence investigations team, wrote in the report. “Threat actors are leveraging our platform to improve their content and work more efficiently.”
OpenAI, the developer of ChatGPT, says it has more than 100 million weekly active users. Its tools make it easier and faster to produce large volumes of content, and can help mask language errors and generate fake engagement.
One of the Russian influence campaigns that OpenAI disrupted, which the company named “Bad Grammar,” ran a Telegram bot and used OpenAI’s models to debug the bot’s code and generate short political comments in English and Russian. The company said the operation targeted Ukraine, Moldova, the United States, and the Baltic states. Another Russian operation, known as “Doppelganger,” which the U.S. Treasury Department has linked to the Kremlin, used OpenAI’s models to generate headlines, convert news articles into Facebook posts, and create comments in English, French, German, Italian, and Polish. Spamouflage, a well-known Chinese network, also used OpenAI’s tools to survey social media activity and generate text in Chinese, English, Japanese, and Korean for posting on multiple platforms, including X, Medium, and Blogspot.
OpenAI also detailed how Stoic, a political marketing firm based in Tel Aviv, used its tools to create pro-Israel content about the war in Gaza. The campaign, which OpenAI nicknamed “Zero Zeno,” targeted audiences in the United States, Canada and Israel. On Wednesday, Meta, the parent company of Facebook and Instagram, said it had removed 510 Facebook accounts and 32 Instagram accounts linked to the same firm. The fake accounts, some posing as African Americans and students in the United States and Canada, often replied to public figures and media organizations with posts praising Israel, criticizing antisemitism at universities and denouncing “radical Islam.” The accounts did not appear to attract significant engagement, according to OpenAI. “It’s not cool that these radical ideas are ruining the atmosphere in our country,” read one post cited in the report.
OpenAI said it is using its own AI-powered tools to investigate and disrupt such foreign influence operations more efficiently. “The investigations described in the report took days, rather than weeks or months, thanks to our tools,” the company said Thursday. It also noted that despite the rapid evolution of AI tools, human error remains a factor. “AI can change the toolkit that human operators use, but it does not change the operators themselves,” OpenAI said. “While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision-making.”