The United States began testing AI in air combat earlier than China: while China was still staging live dogfights between human-controlled and AI-controlled drones, U.S. test pilots were already taking to the skies to test air combat AI.
Popular AI technologies, such as deep reinforcement learning and large language models, operate like black boxes: tasks go in one end and results come out the other, while humans are left in the dark about the inner workings.
But air combat is a matter of life and death. In the near future, pilots will have to work closely with AI, in some cases even entrusting their lives to these intelligent machines. The “black box” problem not only undermines people’s trust in the machines, but also prevents deep communication between humans and machines.
Developed by a team led by Zhang Dong, an associate professor of aviation at Northwestern Polytechnical University, the new AI combat system can use words, data and even charts to explain each instruction it sends to the flight controls.
The AI can also clarify the significance of each command for the current combat situation, the specific flight maneuver involved, and the tactical intent behind it.
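As a rough illustration of what such an explanation might look like in practice, a minimal sketch follows; the field names and values are our assumptions, since the paper does not publish the system’s data structures.

```python
from dataclasses import dataclass

@dataclass
class ExplainedCommand:
    """Hypothetical record pairing a flight-control command with its rationale.

    Illustrative sketch only; field names and values are assumptions,
    not taken from Zhang's paper.
    """
    command: str          # the instruction sent to the flight controls
    importance: float     # weight of the command in the current engagement
    maneuver: str         # the flight operation it belongs to
    tactical_intent: str  # plain-language intent behind the command

# The kind of explanation a pilot might see alongside a command:
cmd = ExplainedCommand(
    command="pitch up 15 deg, throttle 0.9",
    importance=0.82,
    maneuver="combat turn",
    tactical_intent="force the pursuer into an energy-depleting reversal",
)
print(f"{cmd.maneuver}: {cmd.tactical_intent} (importance {cmd.importance:.2f})")
```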
Zhang’s team found that this technology opens a new window for human pilots to interact with AI.
Zhang’s team found that this type of AI, which can communicate “heartily” with humans, can achieve a nearly 100 percent win rate after only about 20,000 rounds of combat training. By contrast, a traditional “black box” AI reaches only a 90 percent win rate after 50,000 rounds and struggles to improve further.
Currently, Zhang’s team has applied the technology only in ground-based simulators, but future applications will be “extended to more realistic air combat environments,” they wrote in a peer-reviewed paper published on April 12 in the Chinese journal Acta Aeronautica et Astronautica Sinica.
In the United States, the “black box” problem has previously been cited as a source of trouble for pilots.
“The big challenge I’m trying to address here at DARPA is how to build and maintain trust in these systems, which are traditionally thought of as unaccountable black boxes,” Col. Dan Javorsek, a program manager at DARPA’s Strategic Technology Office, said in an interview with National Defense Magazine in 2021.
DARPA has adopted two strategies to help pilots overcome their fear of the “black box.” In one approach, the AI initially handles simpler, lower-level tasks, such as automatically selecting the best weapon based on the attributes of a locked target, so the pilot only needs to press a button to authorize the action.
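The pattern described above, in which the AI proposes a routine decision and the pilot merely authorizes it, could be sketched as follows; the rules, target attributes and weapon names here are invented for illustration and are not DARPA’s actual logic.

```python
# Hypothetical sketch of rule-based weapon selection with a human-in-the-loop
# confirmation step, illustrating the "AI proposes, pilot authorizes" pattern.
# The target attributes and weapon rules are invented.

def select_weapon(target: dict) -> str:
    """Pick a weapon from simple, auditable rules on the locked target."""
    if target["range_km"] > 40:
        return "long-range radar-guided missile"
    if target["aspect"] == "head-on":
        return "medium-range missile"
    return "short-range infrared missile"

def engage(target: dict, pilot_confirms) -> str:
    weapon = select_weapon(target)
    # The AI only proposes; the pilot's button press authorizes the action.
    if pilot_confirms(weapon):
        return f"firing {weapon}"
    return "engagement aborted by pilot"

print(engage({"range_km": 12, "aspect": "tail-chase"},
             pilot_confirms=lambda w: True))  # stands in for the button press
```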
Another option is for senior military officers to personally fly an AI-powered fighter jet to demonstrate confidence and determination.
“There’s a security risk not having this. At this point, we have to have it,” US Air Force Secretary Frank Kendall told The Associated Press.
However, according to Zhang’s team’s paper, the Chinese military has conducted rigorous evaluations of the safety and reliability of AI and insists on integrating it into fighter jets only after the mystery of the “black box” has been solved.
Deep reinforcement learning models often produce decisions that are mysterious to humans, even as they demonstrate superior combat effectiveness in real-world applications. It is difficult for humans to understand, or extrapolate from, this decision-making framework on the basis of existing experience.
“This raises trust issues regarding AI decisions,” Zhang et al. write.
“Decoding the ‘black box’ models so that humans can identify the strategic decision-making process, understand a drone’s operational intent and trust its operational decisions is central to the engineering application of AI technology in air combat,” they wrote. “It also highlights the main objective of our research.”
Zhang’s team demonstrated the AI’s capabilities through multiple examples in the paper. In one losing scenario, for example, the AI initially intended to climb and perform a Cobra maneuver, then engage the enemy aircraft through a series of combat turns, aileron rolls and loops, and finally escape with evasive maneuvers such as a swooping dive and level flight.
However, experienced pilots could quickly discern the flaws in this radical combination of maneuvers: the continuous climbs, combat turns, aileron rolls and dives caused the drone’s speed to drop rapidly during the engagement, and it ultimately failed to shake off the enemy.
And, as stated in the paper, here is the feedback the human pilots gave the AI: “Slowing down due to continuous rapid maneuvers caused the loss in this air battle, and such decisions must be avoided in the future.”
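In reinforcement-learning terms, one way to read that feedback is as an extra penalty on speed loss during sustained rapid maneuvering. The toy sketch below illustrates that reward-shaping idea; it is purely an assumption on our part, since the paper does not describe how such feedback is encoded, and all names and values are invented.

```python
def shaped_reward(base_reward: float, speed_loss: float,
                  rapid_maneuver: bool, penalty: float = 0.5) -> float:
    """Toy reward shaping: penalize speed bleed during continuous rapid
    maneuvers, echoing the human feedback quoted above. Illustrative only;
    not the method used in Zhang's paper."""
    if rapid_maneuver and speed_loss > 0:
        return base_reward - penalty * speed_loss
    return base_reward

# A win (base reward 1.0) is worth less if the drone bled 30% of its speed:
print(shaped_reward(base_reward=1.0, speed_loss=0.3, rapid_maneuver=True))  # 0.85
```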
In another round, where human pilots would typically use methods such as flank attacks to find an effective position from which to destroy the enemy aircraft, the AI instead used large maneuvers to lure the enemy, entered a side-winding phase early on using level flight, and in the final stage deceived the enemy to achieve a decisive victory with a sudden, large maneuver.
After analyzing the AI’s intentions, the researchers discovered a sophisticated strategy that played a crucial role during the apparent stalemate.
The AI “employs level-flight and turning tactics, luring the enemy into a radical change of direction while it maintains speed and altitude, depleting the enemy’s residual kinetic energy and paving the way for a loop maneuver in the subsequent counterattack,” Zhang’s team wrote.
US sanctions, however, do not appear to have had any appreciable impact on exchanges between Zhang’s team and international researchers. The team has drawn on new algorithms shared by American scientists at global conferences, and has also published its own innovative algorithms and frameworks in papers.
Some military experts believe the Chinese military takes a stronger interest than the US in the relationship between AI and human fighter pilots.
For example, China’s J-20 stealth fighter has a two-seat variant, with one pilot dedicated to communicating with AI-controlled unmanned aircraft; US stealth fighters have no equivalent design.
But the new technology could blur the line between humans and machines, said a Beijing-based physicist who requested anonymity due to the sensitivity of the issue.
“It could open Pandora’s box,” he said.