The United States and China recently held their first official dialogue on the risks of artificial intelligence, but while it is a step in the right direction, the talks are unlikely to resolve tensions between the two countries over the development and deployment of AI-enabled military systems.
As China develops into a science and technology superpower, former U.S. government and defense industry officials are warning that Washington risks falling behind Beijing, or has already fallen irreparably behind, in the race to develop and deploy AI-enabled military systems.
Adding to simmering anxieties in the U.S. are fears that China does not have adequate testing and evaluation protocols to ensure responsible use and development of AI. Some U.S. observers worry that Beijing is taking a lax approach to guarding against AI mishaps, which could lead to devastating consequences in both the civilian and military domains.
Yet there is little public data on the current state of the bilateral contest over the development and deployment of military AI. While U.S. intelligence analysts have combed secret channels for evidence of Chinese advances and think tank experts have analyzed the impact of export controls on China’s military technology apparatus, few observers have looked for clues about China’s military AI capabilities in the writings of Chinese experts themselves.
In a recent report from Georgetown University’s Center for Security and Emerging Technology, which reviewed 59 Chinese-language academic papers written by Chinese experts, I outline several technical challenges that I believe China faces in integrating AI into its military systems.
The papers I reviewed were written by a variety of experts, including those affiliated with the PLA and those working for companies in China’s military-industrial complex. Moreover, the majority of the papers were published in journals controlled by universities affiliated with the PLA or by key companies in China’s defense industry, such as the China Aerospace Science and Technology Corporation and the Aviation Industry Corporation of China. The journals are published in Chinese and cover highly technical topics, meaning that their primary audience is other experts in China’s security apparatus. As such, the journals are a useful source of information about Chinese analysts’ perceptions of China’s own military AI capabilities.
While military AI has become synonymous with lethal autonomous weapons systems in some quarters, the Chinese experts whose work I reviewed cited challenges across the many components that make up an AI-enabled “kill chain” — the series of processes and decisions that runs from threat identification to final targeting.
For example, experts point out that the PLA still struggles to collect, manage and analyze military-related data. Because China has not fought a war in over 40 years, Chinese analysts argue that the PLA is data-starved and relies on exercises to generate supplemental data resources. Further complicating the situation, scholars argue that China’s military data is often recorded manually and is poorly digitized. “It’s mostly kept in paper files,” explained two analysts at the Dalian Naval Academy. Finally, some experts point out that the PLA’s data resources are fragmented, making it difficult for different services, armies and units to access each other’s data.
Experts also cited challenges such as developing cutting-edge sensors that can gather battlefield information and building low-latency, high-bandwidth communications links with enough capacity to transmit sensor-generated data for AI-powered analysis to inform decision-making.
But their concerns don’t end there: analysts explain that the computer networks on which AI systems run remain vulnerable to cyberattacks. Because cyber intrusions are difficult to detect, experts say, the military may hesitate to trust AI systems, since adversaries could tamper with the algorithms or alter the underlying data, compromising the systems without anyone noticing.
Finally, Chinese analysts have outlined issues related to testing and evaluation (T&E) of AI-powered military systems and the development of military standards. With regard to testing, some Chinese experts argue that Beijing lacks the necessary T&E practices to ensure AI systems work as designed. Others argue that insufficiently tested systems could lead to accidents and other safety issues.
Standards are important because they ensure that systems developed by different companies can properly communicate and work with each other. Without such standards, the PLA could be left with AI systems that are not fully interoperable, limiting their effectiveness in future wars. For example, a scholar at the China Institute of Shipbuilding Industry and Systems Engineering warned that maritime unmanned systems are being developed piecemeal: “Without holistic design and functional integration, unmanned equipment will inevitably become fragmented and chaotic.”
These issues are similar to those likely facing the U.S. Department of Defense, including managing military data, modernizing data and communications networks, and ensuring the resilience and effectiveness of AI systems in future high-intensity warfare.
But that is not the only concern shared by U.S. and Chinese experts about military AI. Contrary to a common assumption in the U.S. debate about China’s views on AI risks, many Chinese defense experts are themselves concerned about potential dangers arising from AI-enabled military systems.
Many experts argue that without the responsible use of fully trustworthy AI systems, it will be difficult to ensure the effectiveness of AI in the military domain, guarantee service members’ confidence in the technology, manage the risks of miscalculation and escalation, and maintain the security and integrity of AI-enabled military systems.
For example, some scholars argue that the use of autonomous weapons equipped with AI could lead to the outbreak or escalation of war. “If such weapons were used on a full scale on the battlefield, it could lead to an escalation of conflict and threaten strategic stability,” two experts from the National University of Defense Technology, under the Central Military Commission, wrote.
But some argue that challenges in ensuring the explainability and trustworthiness of AI-enabled systems will postpone the deployment of these systems “until the military is confident that AI systems are more trustworthy than existing systems.” The authors, who are from companies in China’s defense industrial base, argue that “the military does not trust AI-based systems beyond accomplishing specific tasks.”
The collection of articles does not conclusively indicate the PLA’s risk tolerance regarding the use of AI-enabled military systems, nor does it outline the conditions under which China would use AI in warfare. But the articles reveal detailed discussions of AI risks within the Chinese system.
While it may be naive to treat these arguments as absolute truths, they could influence internal deliberations in China’s opaque policy-making processes and perhaps even shape Beijing’s future official policies and guidelines on these issues.
For U.S. policymakers, these discussions provide evidence that parts of China’s military AI community are aware of and concerned about the reliability and responsible development and use of AI-enabled military systems. Understanding the discussions Chinese defense experts are having about AI risks could help U.S. officials identify and build on common ground regarding the responsible use of such systems. These shared concerns could provide a basis for future discussions and lead to bilateral cooperation to mitigate the risks surrounding the safe development and use of such systems.