Adrian Perkins was running for reelection as mayor of Shreveport, Louisiana, when he was surprised by harsh campaign criticism.
The satirical TV ad, funded by a rival political action committee, used artificial intelligence to portray Perkins as a high school student called into the principal’s office. Instead of reprimanding him for cheating on a test or getting into a fight, the principal berated Perkins for failing to keep the community safe and create jobs.
In the video, Perkins’ face is superimposed onto the body of an actor portraying him. The ad was reportedly created with “deep learning computer technology,” and Perkins said it was powerful and resonated with voters. He lacked the money and campaign staff to counter it, which he believes was one of many reasons he lost the 2022 mayoral race. A representative for the group that produced the ad did not respond to a request for comment.
“The deepfake ad 100% impacted our campaign because we were a down-ballot race with fewer resources,” Perkins, a Democrat, said. “We had to be selective about where we put our efforts.”
While such attacks are a staple of cutthroat political campaigns, the ad targeting Perkins was notable because it is believed to be one of the first uses of an AI deepfake in a U.S. election campaign. It also foreshadowed the dilemma facing candidates in this year’s state and local races as generative AI becomes more widespread and easier to use.
The technology can do everything from streamlining routine campaign tasks to creating fake images, videos and audio. It has already been deployed in some U.S. races and is spreading in elections around the world. Yet despite its power as a tool for misleading voters, efforts to regulate it have been piecemeal and slow, a gap likely to have the greatest impact on lower-profile contests at the bottom of the ballot.
For candidates running these campaigns, artificial intelligence is a double-edged sword: Cheap, easy-to-use AI models can save them money and time on routine tasks. But those candidates often lack the staff and expertise to counter AI-generated misinformation, raising concerns that a last-minute deepfake could mislead voters and sway the outcome of a closely decided race.
“AI-enabled threats affect close or low-profile elections, where small changes matter and resources to correct misleading reports are often scarce,” said Josh Lawson, director of AI and Democracy at the Aspen Institute.
Lack of federal regulation
Some local candidates have already faced criticism for misleading uses of AI, from a Republican senatorial candidate in Tennessee who used AI-generated headshots to make himself appear slimmer and younger, to Philadelphia’s Democratic sheriff, whose reelection campaign promoted fake news stories generated by ChatGPT.
One challenge in separating fact from fiction is the decline of local news outlets. Many places have seen a big drop in coverage of candidates running for state and local office, especially in-depth coverage of their backgrounds and campaigns. A lack of knowledge about the candidates can make voters more likely to believe misinformation, said Sen. Mark Warner of Virginia.
The Democrat, who has worked extensively on AI-related legislation as chairman of the Senate Intelligence Committee, said AI-generated misinformation is easier to spot and address in high-profile elections because of the increased scrutiny it faces. When AI-generated robocalls impersonated President Joe Biden during the New Hampshire primary this year and urged voters to stay home from the polls, it garnered swift media coverage, investigations and serious consequences for those behind it.
More than a third of states have passed laws regulating artificial intelligence in politics, and bills aimed specifically at fighting election-related deepfakes have received bipartisan support in the states that have passed them, according to Public Citizen, a nonprofit consumer advocacy group.
But Congress has yet to act, even though several bipartisan groups of lawmakers have proposed such legislation.
“Congress is pathetic,” Warner said, adding that he was pessimistic that Congress would pass legislation this year to protect elections from AI interference.
Travis Brim, executive director of the Democratic Association of Secretaries of State, said the threat of AI-driven misinformation in lower-level races is an evolving issue that “people are still working through to figure out the best way to address it.”
“This is a really difficult issue, which is why Democratic secretaries have acted quickly to pass actual legislation with actual penalties for misuse of AI,” Brim said.
A spokesman for the Republican Secretaries of State Committee did not respond to AP’s request for comment.
How do you regulate sincerity?
While experts and lawmakers worry that generative AI attacks could skew election results, some candidates for state and local office say the technology, powerful computer systems, software and processes that can mimic aspects of human work and cognition, is proving extremely useful in their campaigns.
Glenn Cook, a Republican running for state representative in southeastern Georgia, has lower name recognition and far less campaign funding than the incumbent he faces in Tuesday’s runoff election, so he has invested in digital consultants and uses cheap, publicly available generative AI models to produce much of his campaign content.
On his website, AI-written articles are interspersed with AI-generated images of community members smiling and chatting, scenes that never actually took place, and AI-generated podcast episodes use a cloned version of his voice to narrate his policy positions.
Cook said he double-checks everything before making it public. The savings in both time and money have allowed him to do more door-to-door canvassing and attend more in-person events in his district.
“My wife and I have knocked on 4,500 doors here,” he said. “It has given us the ability to do a lot more.”
Cook’s opponent, Republican state Rep. Steven Sainz, said he believes Cook is “hiding behind a robot instead of honestly expressing his views to his constituents.”
“I am running my campaign on real world results, not artificial promises,” Sainz said, adding that his campaign does not use AI.
Republican voters in the district were unsure how to feel about the use of AI in the campaign, but said a candidate’s values and outreach mattered most. Patricia Lowell, a retiree who supports Cook, said she liked that Cook came to her area three or four times during the campaign, while Mike Perry, a self-employed Sainz supporter, said he felt a more personal connection to Sainz.
He said the growing use of AI in politics was inevitable, but questioned how voters would be able to distinguish what is true from what is not.
“This is freedom of speech. I don’t mean to deny freedom of speech, but it all comes down to the sincerity of the person speaking,” he said. “I don’t know how you regulate sincerity. It’s pretty difficult.”
Local campaigns are vulnerable
Digital firms that sell AI models for political campaigns told The Associated Press that most uses of AI in local races so far have been modest: tools designed to make tedious tasks, such as analyzing survey data or writing social media copy that meets certain character limits, more efficient.
Political consultants are increasingly testing AI tools to see what works, according to a new report from a team led by researchers at the University of Texas at Austin. More than 20 political operatives from across the ideological spectrum told the researchers they were experimenting with generative AI models in this year’s campaigns, even as they worried that less scrupulous operatives were doing the same.
“Local elections are going to be a lot harder because people are going to attack them,” said Zellie Martin, the report’s lead author and a senior fellow at the university’s Center for Media Engagement. “And while Biden and Trump have far more resources to fend off attacks, do [local candidates] have the tools to fight back?”
There are big differences in staffing, funding and expertise between down-ballot races for state legislature, mayor, school board and other local offices and races for federal office. A local campaign might have just a handful of staffers, while a competitive U.S. House or Senate race could have dozens and a presidential campaign could have thousands by the end of the cycle.
Both the Biden campaign and former President Donald Trump’s campaign are experimenting with AI to bolster fundraising and voter outreach. Biden campaign spokeswoman Mia Ellenberg said they also plan to debunk AI-generated misinformation. A Trump campaign spokeswoman did not respond to AP’s questions about how they handle AI-generated misinformation.
Perkins, the former Shreveport mayor, led a small team that decided to ignore the attack and keep campaigning when the deepfake of him being called into the principal’s office aired on local television. At the time, Perkins saw the ad as a classic dirty trick, but in the two years since his campaign, the rise of AI has made him realize its power as a tool to deceive voters.
“In politics, people are always going to push the envelope a little bit to be effective,” he said. “We had no idea how significant it would be.”
___
Burke reported from San Francisco, Merica from Washington and Swenson from New York.
___
This story is part of “The AI Campaign,” an Associated Press series exploring the impact of artificial intelligence in the 2024 election cycle.
___
The Associated Press receives support from several private foundations to strengthen its explanatory coverage of elections and democracy, and from the Omidyar Network to support its coverage of artificial intelligence and its impact on society. The Associated Press is solely responsible for all content. For AP’s philanthropic engagement standards, a list of supporters and areas of coverage funded, visit AP.org.