NEW DELHI: ChatGPT developer OpenAI said it acted within 24 hours to disrupt the deceptive use of AI in a covert influence operation focused on India’s elections, adding that the campaign did not achieve a significant increase in audience engagement.
In a report on its website, OpenAI said Israeli political campaign management company STOIC produced content about the Indian elections in parallel with content about the Gaza conflict.
“In May, the network began focusing on India, making comments criticising the ruling Bharatiya Janata Party and praising the opposition Indian National Congress,” it said.
“In May, we disrupted several election-focused activities within 24 hours of the start of the Indian elections.”
OpenAI announced that it had banned a cluster of accounts operated from Israel that were being used to generate and edit content for influence operations across X, Facebook, Instagram, YouTube and websites associated with the operation.
“The operation targeted audiences in Canada, the United States and Israel with English and Hebrew content. In early May, it began targeting audiences in India with English content.”
No details were given.
Commenting on the report, India’s Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said: “It is absolutely clear that @BJP4India has been and continues to be the target of influence operations, misinformation and foreign interference perpetrated by or on behalf of some political parties in India.”
“This is a very dangerous threat to our democracy. It is clear that this is being driven by vested interests both in India and abroad and needs to be thoroughly scrutinised, investigated and exposed. My view now is that these platforms could have exposed this much earlier and not just before the elections are over,” he added.
OpenAI said it is committed to developing AI that is safe and broadly beneficial. “Our investigation into suspected covert influence operations (IO) is part of a broader strategy to achieve our goal of safe AI deployment.”
OpenAI said it is committed to increasing transparency and enforcing policies against the misuse of AI-generated content. A particular focus is detecting and disrupting covert influence operations, which attempt to manipulate public opinion or affect political outcomes without revealing the true identity or intent of the actors behind them.
“Over the past three months, we have disrupted five covert IOs that attempted to use our models to support deceptive activities across the internet. As of May 2024, these campaigns do not appear to have significantly increased audience engagement or reach as a result of our services,” the company said.
OpenAI clarified that it had disrupted the influence operation run by the Israeli commercial company STOIC, not the company itself.
“We named the operation Zero Zeno, after the founder of Stoic philosophy. The people behind Zero Zeno used our model to generate articles and commentary that were posted across multiple platforms, including Instagram, Facebook, X, and websites associated with the operation,” the statement said.
The content posted by these various campaigns focused on a wide range of issues, including Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, Western politics, and criticism of the Chinese government by Chinese dissidents and foreign governments.
OpenAI said it is taking a multi-pronged approach to combating misuse of its platform, including monitoring and disrupting threat actors such as nation-state-aligned groups and advanced persistent threats: “We are investing in technology and teams to identify and disrupt actors like those discussed here, and are leveraging AI tools to help counter abuse.”
The company said it will also collaborate with other organizations across the AI ecosystem to flag potential misuse of AI and share its learnings with the public.
Published May 31, 2024 15:48 IST