“The United States is applying artificial intelligence to its weapons systems as quickly and as widely as possible, which poses additional risks to the world,” a senior PLA official said, speaking on condition of anonymity at a national security conference in Singapore.
“If the United States were to use artificial intelligence in its nuclear weapons systems, what would be the consequences? This should grab the world’s attention.”
The PLA official also outlined Beijing’s efforts to manage the risks posed by the technology, both through the United Nations and through its own proposals under the Global AI Governance Initiative, launched last year.
The United States is also seeking to take the lead through the Political Declaration on Responsible Military Uses of AI and Autonomy, which has been signed by more than 50 countries, though not by China.
The technology is already being used on the battlefields of the conflicts in Gaza and Ukraine.
Tong Zhao, a senior fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace, said the U.S. and China had to overcome a series of obstacles to address the issue, but that “the fundamental obstacle is an increasingly competitive bilateral relationship.”
The two countries held their first talks on AI in Geneva in early May, with U.S. officials expressing concern about China’s “misuse of AI” and Beijing accusing Washington of “restrictions and repression.”
Zhao said Beijing is particularly hesitant to restrict the development of military AI because the technology could be used in a future conflict with Washington.
He added that the U.S.-led declaration has “limited appeal” in China, given widespread opposition there to what Beijing sees as Western frameworks, including the rules-based international order.
In early May, State Department arms control official Paul Dean said in an online briefing that the United States had made a very “clear and strong” commitment that decisions to deploy nuclear weapons would be made solely by humans, not artificial intelligence, and he called on China and Russia to make similar statements.
So far the two sides are not known to have held specific discussions about military uses of AI, but the Geneva talks, which were not attended by military representatives, addressed the broader risks of AI technology.
“Military AI is certainly an important topic, but it just adds another dimension to an existing set of U.S.-China security concerns, some of which appear more urgent than others,” said Sam Bresnick, a research fellow at Georgetown University’s Center for Security and Emerging Technologies.
He said obstacles to an agreement regulating military uses of AI include “a lack of bilateral trust” and “concerns about leaks of information about capabilities…or a desire not to restrict the development and deployment of AI-enabled military systems at a time when related technologies appear to be developing more rapidly.”
Senior Colonel Zhu Qichao, deputy director of the Institute of Strategic Research on Defense Science and Technology, a think tank at the National University of Defense Technology, recently accused the United States of “double-dealing” in the debate over AI.
He told the nationalist newspaper Global Times that Washington was simply seeking to discuss the issue with China in order to learn more about China’s capabilities.
“I am deeply concerned about the unrestrained use of new technologies on the battlefield,” Rob Bauer, chair of NATO’s Military Committee, said during a panel discussion at the Shangri-La Dialogue. “Technology is increasing our destructive power while our ability to regulate it is rapidly diminishing.”
After two world wars, he said, there was a widespread belief that great powers should never again fight major conflicts on the battlefield, and that weapons systems needed to be regulated and controlled.
“If there was a seismic shift in power and the world split into several parallel systems with different rules, could they coexist?” he added.