(TND) — The Federal Communications Commission announced the first steps aimed at bringing transparency to political ads featuring content generated by artificial intelligence.
FCC Chair Jessica Rosenworcel said she was concerned about the growing role of AI in creating “deepfakes” and other images, videos and audio recordings that could mislead voters.
“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a news release. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see.”
The FCC is not proposing a ban on AI-generated content in political ads, only disclosure when such material is used.
Disclosures, both on air and in a station’s political file, could apply to both candidate and issue advertisements.
But this is very early in the process.
If adopted by the commission, additional steps, including public comment on the proposed rules, would have to be taken before anything is actually implemented.
“I think this is probably the right decision,” Daniel Schiff, a policy scientist and co-director of Purdue University’s Governance and Responsible AI Lab, said Thursday.
He called the FCC’s action “fairly light” and a reasonable rule to put in place.
FCC leadership said AI-generated content in political ads warrants attention and that getting public input on the issue is worthwhile.
In February, the FCC took action to make it illegal to use AI-generated voices in robocalls.
And on Thursday, the FCC proposed levying $6 million in fines against those involved in AI robocalls in New Hampshire and the company that delivered the calls.
Schiff said time is running out to get something in place before this fall’s presidential election.
However, he said the proposal could have implications for future elections, with the hope that more mature regulations around AI-generated content will be in place by the 2028 presidential election, if not sooner.
Schiff said it was “absolutely essential” that regulators and lawmakers proactively address the use of AI in politics.
“This is the core of our government and society, and it already faces many known challenges with electoral processes and public trust,” he said. “Thus, those with any authority or scope here need to understand these dramatic changes to our information environment and the possible and predictable uses and misuses of AI in the context of elections.”
According to a recent Elon University poll, more than three-quarters of Americans expect the misuse of AI to affect the outcome of the 2024 presidential election. Among the findings:
- 73% of Americans believe it is “very” or “somewhat” likely that AI will be used to manipulate social media to influence the outcome of the presidential election, for example by generating posts from fake accounts or bots or by distorting people’s impressions of a campaign.
- 70% say the election is likely to be influenced by the use of AI to generate false information, videos and audio material.
- 62% of respondents said that AI could be used to target and influence elections in order to persuade some voters not to vote.
“They’re concerned about deepfakes and other ways that misinformation is further amplified in the election arena,” Lee Rainey, director of Elon University’s Center for Digital Futures Envisioning, told The National Desk’s Janae Bowens. “Above all, they’re worried about misinformation and disinformation and how it can be manipulated.”
Schiff said a national standard on AI content in political ads would be helpful, but that some states are stepping in to fill the gaps.
He said he was impressed with the speed at which states have moved to enact legislation on the issue.
The Voting Rights Institute said it is tracking more than 100 bills in 39 state legislatures to regulate the potential for AI to generate disinformation about elections.
Wisconsin, for example, passed a law requiring disclaimers when generative AI is used in political ads, according to the Voting Rights Institute. Failure to comply carries a $1,000 fine for each violation.
According to AL.com, the Alabama Legislature has made it a crime to distribute “substantially deceptive media” to deceive voters.
“I think the regulatory patchwork is a real risk here,” Schiff said.
But he said much electoral activity takes place at the local level, where state-level laws can be effective.
The Department of Homeland Security warned state and local leaders to be on high alert for false content from domestic and foreign malicious actors.
“The strategy of many bad actors is to throw a bunch of spaghetti at the wall and see what sticks,” Rainey said. “What’s being spread, how people are reacting to it, and whether our adversaries are potentially behind it, all of that is a signal that helps show how foreign actors are operating here.”
Rainey said everyone should be skeptical of political content and fact-check it against multiple reliable sources, especially if something seems suspicious.
Editor’s note: Janae Bowens from the National Desk contributed to this report.