A majority of American voters are skeptical of the argument that the United States should race to develop more powerful artificial intelligence, unbound by domestic regulations, in order to compete with China, according to a new poll shared exclusively with TIME magazine.
The results show that U.S. voters reject a view common in the tech industry, whose CEOs and lobbyists have repeatedly argued that the U.S. must tread carefully on AI regulation to avoid handing an advantage to geopolitical rivals. The survey also reveals a surprising degree of bipartisan agreement on AI policy, with both Republicans and Democrats favoring government limits on AI development that prioritize safety and national security.
The poll found that 75% of Democrats and 75% of Republicans believe “taking a carefully managed approach” to AI — preventing the release of tools that terrorists or foreign adversaries could use against the United States — is preferable to “pushing AI forward as quickly as possible so that we’re the first country to have extremely powerful AI.” Majorities of voters support stricter security practices by AI companies and are concerned about the risk of China stealing the most powerful models, the poll showed.
The poll was conducted in late June by the AI Policy Institute (AIPI), a U.S. non-profit organization that advocates a “more cautious path” in AI development. According to the results, 50% of respondents believe the U.S. should use its advantage in the AI race to enforce “safety restrictions and aggressive testing requirements” to prevent any country from building a powerful AI system. In contrast, only 23% of respondents believe the U.S. should build powerful AI as quickly as possible to outpace China and gain a decisive advantage over Beijing.
The poll also suggests that voters may be broadly skeptical of “open source” AI, the idea that tech companies should be allowed to release the source code of their powerful AI models. Some technologists argue that open source AI will spur innovation and curb the monopoly power of big tech companies. Others counter that it poses dangers as AI systems grow more powerful and unpredictable.
“My sense from the polls is that stopping AI development is not considered an option,” says AIPI executive director Daniel Colson, “but giving the industry free rein is also seen as risky. So a third path is desired. And when you present that third path in polls — AI development with guardrails — that’s what people overwhelmingly want.”
The survey also found that 63% of American voters believe it should be illegal to export powerful AI models to potential US adversaries like China, including 73% of Republicans and 59% of Democrats. Just 14% of voters disagree.
The survey sampled 1,040 Americans across education levels, genders, races, and party affiliation in the 2020 presidential election, and has a margin of error of plus or minus 3.4 percentage points.
So far, there is no comprehensive AI regulation in the United States, and the White House has encouraged government agencies to self-regulate AI technologies within their existing authority. However, this strategy appears to be in jeopardy due to recent Supreme Court decisions that limit the ability of federal agencies to apply broad rules set by Congress to specific or new situations.
“Congress has been so slow to act, so when it comes to AI policy, there’s been a lot of interest in being able to delegate authority to existing or new agencies to make the government more responsive,” Colson said. “[The ruling] definitely makes that harder.”
Even though federal AI legislation is unlikely before the 2024 election, if not longer, recent polls by AIPI and others suggest that voters are less polarized on AI than on other issues facing the country. A previous AIPI poll found that 75% of Democrats and 80% of Republicans believe U.S. AI policy should try to prevent AI from rapidly reaching superhuman capabilities. The poll also found that 83% of Americans believe AI could accidentally cause a catastrophic event, and 82% would like to slow AI development because of that risk, compared with just 8% who want to accelerate it.