WASHINGTON — Two Air Force fighter jets recently collided in a dogfight over California. One plane was flown by a pilot. The other wasn’t.
The second jet was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian riding in the front seat. It was the ultimate display of how far the Air Force has come in developing a technology with roots in the 1950s. But it is only a hint of the technology to come.
The United States is racing to stay ahead of China when it comes to AI and its use in weapons systems. The focus on AI has raised public concerns that future wars will be fought by machines that select and attack targets without direct human intervention. Officials say that will never happen, at least on the U.S. side. But there are questions about what potential adversaries will tolerate, and the military believes it has no choice but to rapidly deploy U.S. capabilities.
“Whether you want to call this a competition or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “We both recognize that this will be a very important element on the future battlefield. China is working on this issue as passionately as we are.”
Let’s take a look at the history of military development of AI, what technologies will emerge in the future, and how they will be controlled.
From machine learning to autonomy
AI has its roots in the military and is actually a combination of machine learning and autonomy. Machine learning occurs when a computer analyzes data and a set of rules to reach conclusions. Autonomy occurs when those conclusions are applied to take action without further human input.
This appeared in its earliest form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Through a series of human-programmed if/then rule sets, Aegis was trained to detect and intercept incoming missiles autonomously, and faster than a human could. But the Aegis system was not designed to learn from its decisions, and its reactions were limited to the rules it had been given.
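That if/then approach can be illustrated with a short sketch. Everything here, the threat criteria, the numbers and the responses, is hypothetical and purely illustrative, not an actual Aegis rule set:

```python
# Hypothetical sketch of a hand-programmed if/then ruleset: every rule is
# written by a human, and the system can never react outside those rules.

def classify_track(speed_m_s: float, range_km: float, closing: bool) -> str:
    """Return an action for a radar track using fixed, human-written rules."""
    if not closing:
        return "monitor"                    # not approaching: keep watching
    if speed_m_s > 680 and range_km < 40:   # fast and close: treat as missile
        return "engage"
    if speed_m_s > 680:
        return "track"                      # fast but still distant
    return "monitor"                        # slow contacts are only monitored

# The rules are applied without further human input (autonomy), but the
# system cannot learn: a threat outside the ruleset is simply missed.
print(classify_track(900.0, 25.0, closing=True))  # -> engage
```

The autonomy comes from applying the rules without a human in the loop; the limitation is that nothing outside the rules is ever handled.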
“If a system uses ‘if/then,’ it’s probably not machine learning. Machine learning is a field of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to help with the Air Force’s AI development.
AI took a big step forward in 2012, when the combination of big data and advanced computing power enabled computers to analyze information and write rule sets themselves. This is what AI experts call the “big bang” of AI.
Artificial intelligence emerged as the data, and the computing power to analyze it, allowed computers to write rules themselves. Systems can then be programmed to act autonomously based on conclusions drawn from those machine-written rules, which is a form of AI-enabled autonomy.
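The shift from human-written to machine-written rules can be sketched in a few lines. In this toy example the “rule” is a single speed threshold: instead of a programmer choosing it, the computer derives it from labeled data. The data and the threat framing are invented for illustration only:

```python
# Illustrative sketch: the computer writes its own rule (a threshold)
# from data, instead of a human hard-coding it.

def learn_threshold(samples):
    """Pick the speed threshold that best separates labeled examples.

    samples: list of (speed, label) pairs, label 1 = threat, 0 = benign.
    Returns the candidate threshold with the fewest misclassifications.
    """
    candidates = sorted(speed for speed, _ in samples)
    best_t, best_errors = 0.0, len(samples) + 1
    for t in candidates:
        errors = sum(1 for speed, label in samples
                     if (speed > t) != bool(label))  # rule: speed > t => threat
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Labeled training data (hypothetical): the fast tracks were threats.
data = [(220, 0), (300, 0), (480, 0), (750, 1), (820, 1), (910, 1)]
threshold = learn_threshold(data)

# The machine-written rule can now be applied autonomously to new tracks.
is_threat = lambda speed: speed > threshold
```

The human supplies data and a learning procedure; the rule itself is the machine’s output, which is the essential difference from the Aegis-style approach.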
Testing AI alternatives to GPS navigation
Air Force Secretary Frank Kendall got a taste of that advanced warfare this month when he flew aboard Vista, the first F-16 fighter jet controlled by AI, during an air combat training exercise over Edwards Air Force Base in California.
While this jet is the most visible sign of ongoing AI development, there are hundreds of AI projects underway across the Department of Defense.
At MIT, military personnel are working to clean up thousands of hours of recorded pilot conversations to create a data set from the mass of messages exchanged between flight crews and air operations centers during flights, so the AI can learn the difference between critical messages, such as a runway being closed, and routine cockpit chatter. The goal is to have the AI learn which messages should be elevated so controllers can spot them faster.
In another important project, the military is working on developing AI to replace navigation that relies on GPS satellites.
In future wars, high-value GPS satellites could be hit or jammed. Losing GPS could cripple U.S. communications, navigation and banking systems, and prevent U.S. military aircraft and warships from mounting a coordinated response.
So last year, the Air Force flew an AI program on a laptop strapped to the floor of a C-17 military cargo plane to work on an alternative solution that harnesses the Earth’s magnetic field.
It has long been known that aircraft could navigate by the Earth’s magnetic fields, but until now that has not been practical, because each aircraft generates so much of its own electromagnetic noise that there was no good way to isolate the Earth’s emissions.
“Magnetometers are very sensitive,” said Col. Garry Floyd, director of the Department of the Air Force-MIT Artificial Intelligence Accelerator program. “If you turn on the strobe lights on a C-17, we would see it.”
Over the course of the flights and with large amounts of data, the AI learned which signals to ignore and which to follow, and the results were “very impressive,” Floyd said. “We’re talking tactical airdrop quality.”
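The filtering problem can be pictured as learning to predict and subtract the aircraft’s own magnetic signature. This toy sketch, not the Air Force’s actual model, learns a linear relationship between one invented aircraft-state signal (strobe-light current) and the magnetometer reading, then subtracts the predicted aircraft contribution to recover the Earth-field component:

```python
# Toy illustration of magnetic-noise removal: learn how an onboard signal
# (e.g., strobe-light current) distorts the magnetometer, then subtract it.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Simulated calibration data: the true Earth field is 50.0 units, and the
# aircraft adds 2.0 units of distortion per amp of strobe current.
currents = [0.0, 1.0, 2.0, 3.0, 4.0]
readings = [50.0 + 2.0 * c for c in currents]

a, b = fit_line(currents, readings)   # intercept b recovers the Earth field

def earth_field(reading, current):
    """Subtract the learned aircraft contribution from a raw reading."""
    return reading - a * current

print(round(earth_field(58.0, 4.0), 1))  # -> 50.0
```

The real system learns from far messier data, with many interfering sources at once, but the principle is the same: model the aircraft’s noise well enough that what remains is the Earth’s signal.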
“We think we may have added an arrow to the quiver of things we can do, should we end up operating in a GPS-denied environment. Which we will,” Floyd said.
So far the AI has been tested only on the C-17. Additional aircraft will be tested, and if it works, it could give the military another way to operate if GPS goes down.
Safety rails and pilot voices
The Vista, the AI-controlled F-16, is equipped with significant safety rails as the Air Force trains it. There are mechanical limits that prevent the still-learning AI from performing maneuvers that would endanger the plane. There is also a safety pilot who can take over from the AI with the push of a button.
The algorithm cannot learn during a flight, so each time it flies it uses only the data and rule sets created from previous flights. After a flight is completed, the algorithm is transferred back into a simulator, where it is fed the new data gathered in flight so it can learn, create a new set of rules and improve its performance.
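That fly-then-train cycle can be outlined in Python-flavored pseudocode. The function names are invented stand-ins, not real Air Force software; the point is only the structure: the policy is frozen in flight, and learning happens afterward in the simulator:

```python
# Illustrative outline of the frozen-in-flight / learn-on-the-ground cycle.
# All functions here are hypothetical stand-ins for the real systems.

def fly_mission(policy):
    """Fly with a FROZEN rule set; only record data, never update the policy."""
    flight_log = [policy("sensor frame %d" % i) for i in range(3)]
    return flight_log

def retrain_in_simulator(policy_version, flight_log):
    """After landing: feed the new flight data back in, build a new rule set."""
    return policy_version + 1  # stand-in for an actual training pass

policy_version = 1
for sortie in range(3):
    # In flight, the current (frozen) policy just maps inputs to actions.
    log = fly_mission(lambda frame: "v%d action for %s" % (policy_version, frame))
    # On the ground, the flight log becomes training data for the next version.
    policy_version = retrain_in_simulator(policy_version, log)

print(policy_version)  # -> 4 after three fly/retrain cycles
```

Keeping the learning step on the ground is itself a safety measure: a new rule set is only deployed after it has been produced and exercised in the simulator.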
But the AI is learning rapidly. Thanks to the supercomputing speed it uses to analyze the data and then fly the new rule sets in the simulator, it is quickly finding the most efficient ways to fly and maneuver, and it is already beating some human pilots in air combat exercises.
But safety remains a major concern, and officials said the most important safety check is controlling what data gets reinserted into the simulator for the AI to learn from. In the jet’s case, that means making sure the data reflects safe flying. Ultimately, the Air Force hopes the version of AI it is developing can serve as the brain for a fleet of 1,000 unmanned fighter jets being developed by General Atomics and Anduril.
In an experiment to train an AI on how pilots communicate, military personnel assigned to MIT cleaned up the recordings to remove classified information and the pilots’ sometimes salty language.
Learning how pilots communicate “reflects command and control and a pilot’s mindset,” said Grady, the Joint Chiefs vice chairman. “The machines need to understand that. They don’t need to learn how to swear.”