Top Line
OpenAI identified and disrupted five influence operations involving users in Russia, China, Iran, and Israel who were using its AI technologies, including ChatGPT, to generate content intended to deceptively influence public opinion and political discourse.
Key Facts
The company said those behind the operations used its AI to generate content that was shared across various platforms, to write code for bots, and to produce fake replies to social media posts that created the illusion of engagement.
Two operations originated from Russia: the well-known “Doppelganger” operation, which uses AI to generate fake content and comments, and a “previously unreported” operation the company calls “Bad Grammar,” whose operators, OpenAI says, coded a bot that used its models to post short political comments on Telegram.
OpenAI said that a Chinese group known as “Spamouflage,” notorious for its influence activities on Facebook and Instagram, used its models to study social media activity and generate text-based content in multiple languages and across multiple platforms.
OpenAI linked one campaign to Stoic, an Israeli political campaign firm that used AI to generate posts about Gaza on Instagram, Facebook, and X (formerly Twitter) targeting audiences in Canada, the US, and Israel.
In its first report on the subject, OpenAI said the operations also generated content about Russia’s invasion of Ukraine, the Indian elections, Western politics, and criticism of the Chinese government.
Contra
OpenAI concluded that AI did little to expand the reach of these influence operations. Citing the Brookings Institution’s “Breakout Scale,” a six-point measure of an influence operation’s impact, OpenAI said none of the operations it identified scored higher than a two, meaning the AI-generated content never broke out to authentic user communities.
Main Background
Influence efforts on social media platforms have been a focus of the tech industry since 2016, when Russian-backed groups were found to be sowing discord and boosting then-candidate Donald Trump ahead of the presidential election. Since then, tech companies have monitored similar activity and regularly documented it in reports. Last month, Microsoft found evidence that a new Russian-backed influence campaign was underway ahead of the 2024 election.
Tangent
Meta also released a report this week linking Stoic to influence operations, saying it had taken down hundreds of fake Facebook and Instagram accounts tied to the firm. The Meta report also identified some of the content posted by those accounts as AI-generated.