NEW YORK — As crucial elections approach in the United States and the European Union, publicly available artificial intelligence tools could easily be weaponized to inject convincing election lies into the voices of leading politicians, a digital civil rights group said Friday.
Researchers at the Washington, DC-based Countering Digital Hate Center tested six of the most popular AI voice cloning tools to see if they could generate audio clips of five false statements about the election in the voices of eight prominent American and European politicians.
Out of 240 total tests, the tool produced convincing voice clones in 193 cases, or 80 percent of the time, the researchers found. In one video, a fake U.S. President Joe Biden says that election officials are counting his votes twice. In another, a fake French President Emmanuel Macron warns people not to vote because of bomb threats at polling stations.
The findings reveal significant gaps in safeguards against the use of AI-generated voices to deceive voters, a threat that has become a growing concern among experts as the technology becomes more sophisticated and accessible. While some tools have rules and technical barriers in place to stop the generation of disinformation about the election, the researchers found that many of those barriers can be easily circumvented with simple workarounds.
Only one of the companies whose tools the researchers used responded to multiple requests for comment: ElevenLabs, which said it is constantly looking for ways to strengthen its security measures.
There are few laws to prevent the misuse of these tools, and the lack of corporate self-regulation leaves voters at risk of AI-generated deception in a year of crucial democratic elections around the world: EU voters are casting their ballots in parliamentary elections in less than a week, and the US is holding its presidential primaries ahead of this fall’s election.
“It’s all too easy to use these platforms to invent lies and force politicians to deny and back down over and over again,” said Imran Ahmed, CEO of the center. “Unfortunately, our democracy is being sold off for the naked greed of desperate AI companies trying to be first to market… even though they know their platforms are not safe.”
The center, a nonprofit with offices in the U.S., the U.K. and Belgium, conducted the study in May. Using online analytics tool Semrush, researchers identified six publicly available AI voice duplication tools with the highest monthly organic web traffic: ElevenLabs, Speechify, PlayHT, Descript, Invideo AI and Veed.
Next, participants were provided with actual audio clips of politicians speaking, and they used the tool to imitate the politicians’ voices and make five unsubstantiated statements.
One of the statements warned voters to stay home after a bomb threat was made at a polling station, while the other four were various admissions of election rigging, lying, misappropriating campaign funds and taking strong drugs that caused memory loss.
In addition to Biden and Macron, the tool features realistic voice reproductions of US Vice President Kamala Harris, former US President Donald Trump, UK Chancellor Rishi Sunak, UK Labour Party leader Keir Starmer, European Commission President Ursula von der Leyen and EU Internal Market Commissioner Thierry Breton.
“None of the AI voice cloning tools had sufficient safeguards to prevent the cloning of politicians’ voices or the creation of election disinformation,” the report said.
Tools like Descript, Invideo AI, and Veed require you to upload a unique audio sample before cloning your voice to prevent them from cloning a voice that isn’t your own, but researchers found they could easily get around this barrier by using another AI voice cloning tool to generate a unique sample.
The tool, Invideo AI, not only created the false statements requested by the center, but also extrapolated them to create further disinformation.
The company added some of its own text when producing an audio clip instructing a clone of Biden’s voice to warn people about bomb threats at polling places.
“This is not a call to abandon democracy, but a plea to put safety first,” Biden’s voice said in the fake audio clip. “An election that celebrates our democratic rights will only be postponed, not denied.”
The researchers found that Speechify and PlayHT performed the worst overall in terms of safety, producing believable fake voices in all 40 test runs.
ElevenLabs performed best, being the only tool that blocked duplicate voices of British and US politicians, but the tool still allowed the creation of fake voices imitating those of prominent EU politicians, the report said.
Aleksandra Pedraszewska, head of AI safety at Eleven Labs, said in an emailed statement that the company welcomes the report and the awareness it raises about generative AI manipulation.
She said ElevenLabs knows it still has work to do and is “continuously improving the capabilities of our safety measures,” including the company’s blocking features.
“We hope other voice AI platforms will follow suit and roll out similar measures without delay,” she said.
Other companies mentioned in the report did not respond to emailed requests for comment.
The findings come after AI-generated audio clips have already been used to sway voters in elections around the world.
In the fall of 2023, just days before Slovak parliamentary elections, an audio clip sounding like the leader of the Liberal Party was widely shared on social media. The deepfake purportedly contained the leader talking about rising beer prices and vote rigging.
Earlier this year, an AI-generated robocall imitated Biden’s voice urging New Hampshire primary voters to stay home and “save” their vote in November. A New Orleans magician who creates audio for Democratic political consultants showed The Associated Press how he created it using software from ElevenLabs.
AI-generated voices have long been a favorite of bad actors, experts say, thanks in part to rapid advances in the technology: it only takes a few seconds of real audio to create a realistic fake.
Other AI-generated media has also concerned experts, lawmakers and tech industry leaders. OpenAI, the developer of ChatGPT and other popular generative AI tools, said Thursday that it had discovered and disrupted five online campaigns that used its technology to sway public opinion on political issues.
Ahmed, the Center for Countering Digital Hate CEO, said he would like to see AI audio duplication platforms step up security measures and become more proactive about transparency, such as by making public their libraries of created audio clips so that people can check when suspicious audio goes viral online.
He also said lawmakers need to act. The U.S. Congress has yet to pass legislation to regulate AI in elections, and the European Union has passed a wide-ranging artificial intelligence bill that is due to come into force over the next two years, but it does not specifically mention voice cloning tools.
“Lawmakers need to work to ensure that minimum standards are met,” Ahmed said. “The threat that disinformation poses to our elections is not just that it can spark minor political incidents, but that it can lead people to distrust what they see and hear.”
___
The Associated Press receives support from several private foundations to strengthen its commentary coverage of elections and democracy. Learn more about the AP Democracy Initiative here. The Associated Press is solely responsible for all content.