Artificial intelligence (AI)-generated “deepfakes” that impersonate politicians or celebrities are far more prevalent than attempts to use AI in cyber attacks, according to the first study by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.
The research found that creating realistic but fake images, videos and audio of people is almost twice as common as the next most common misuse of generative AI tools – using chatbots or other text-based tools to fabricate information and create disinformation to post online.
An analysis conducted in collaboration with Google’s research and development division Jigsaw found that the most common goal for malicious actors using generative AI was to shape or influence public opinion, accounting for 27 percent of uses, raising concerns about how deepfakes could impact this year’s global elections.
Deepfakes of UK Prime Minister Rishi Sunak and other world leaders have appeared on TikTok, X and Instagram in recent months. British voters face a general election next week.
Despite efforts by social media platforms to label and remove such content, there are widespread concerns that viewers will not recognise it as fake and that its spread could influence voters.
Ardi Janjeva, a research fellow at the Alan Turing Institute, called “particularly pertinent” the paper’s finding that the contamination of publicly available information with AI-generated content “has the potential to distort our collective understanding of socio-political reality.”
Janjeva added that “while there is uncertainty about the impact of deepfakes on voting behaviour, the distortions could be difficult to detect in the short term and pose longer-term risks to our democracies.”
The study is the first from DeepMind, Google’s AI division led by Sir Demis Hassabis, to attempt to quantify the risks associated with generative AI tools, which the world’s largest tech companies have rushed out to the public in pursuit of huge profits.
As generative products such as OpenAI’s ChatGPT and Google’s Gemini become more widely used, AI companies have begun monitoring the flood of misinformation and potentially harmful or unethical content created with their tools.
In May, OpenAI released research revealing that operations linked to Russia, China, Iran and Israel were using its tools to create and spread disinformation.
“There were legitimate concerns that these tools could facilitate very sophisticated cyber attacks,” said Nahema Marchal, lead author of the study and a researcher at Google DeepMind, “but what we saw were fairly common misuses of GenAI [such as deepfakes] that might go a little more under the radar.”
Researchers from Google DeepMind and Jigsaw analysed approximately 200 cases of misuse observed between January 2023 and March 2024, collected from the social media platforms X and Reddit, as well as online blogs and media reports of misuse.

The second most common motivation for misuse was to make money, whether by offering services to create deepfakes, including nude depictions of real people, or by using generative AI to produce large volumes of content such as fake news articles.
The research found that most incidents involved easily accessible tools requiring “minimal technical expertise,” meaning more bad actors can exploit generative AI.
The research will shape how Google DeepMind improves the evaluations it uses to test the safety of its models, and the company hopes it will also influence how competitors and other stakeholders view “how harm might manifest.”