Most of us who watched the Terminator movies have probably wondered when the killer robots will come for us. A recent Pentagon test pitting a fighter pilot against an artificial intelligence suggests that combat robots are on the horizon. The Defense Advanced Research Projects Agency (DARPA) recently held a competition among eight artificial intelligence systems, with the winner then squaring off against a human fighter pilot.
The computer fighter pilot won five out of five encounters.
During the air combat maneuvering (ACM) courses I took, the emphasis was on perfecting the skills needed to survive an encounter with an enemy aircraft. ACM focused on how to position ourselves to control an attack. Although we were expected to know the capabilities of our aircraft and what our bodies could withstand to counter an attack, it all boiled down to who was going to make the first mistake – us or the aggressor.
But computers do not make mistakes, because they have no free will. Computers – artificial intelligence included – follow programmed commands, responding to each situation in milliseconds. They do not deviate from their programming, even when they can adapt to an ever-changing environment. This works because ACM techniques all come down to a predetermined set of positions each aircraft must reach to gain the upper hand.
In ACM, there are two limiting factors – the limits of the airframe and the limits of the human body. Each constrains what an air combat scenario looks like. Assuming equal capability in both aircraft, equally qualified pilots, and equal tolerance to g-forces, the outcome lies squarely in the hands of the pilots. (This was before fire-and-forget and over-the-horizon technology.)
In other words, who makes the first mistake is the one likely to lose.
But what if one of the opponents doesn’t make mistakes?
I play chess on my tablet whenever there is time to kill. I have set the chess app I use to its highest Elo rating. My own rating is nowhere near that level, so every game against the computer ends with me losing. Sometimes it takes a few minutes; sometimes I can draw a match out to 30 minutes or more.
However, since the computer does not make mistakes, the outcome is predictable. I lose every time.
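The Elo system actually quantifies that predictability: the expected score of one player against another follows a standard logistic formula based on the rating gap. Here is a minimal sketch, with the specific ratings chosen purely as hypothetical examples:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score for player A against player B under the Elo model.

    E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))
    """
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# Equal ratings -> a 50/50 expectation.
print(expected_score(1500, 1500))  # 0.5

# Hypothetical gap: a club-level player (1500) against an engine
# set near the top of the scale (2800). The expectation is well
# under one win in a thousand games.
print(round(expected_score(1500, 2800), 4))
```

With a 1,300-point gap, the formula says my losses are not bad luck; they are the statistically expected result, repeated game after game.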
On August 20, the Pentagon conducted an experiment in which a U.S. Air Force pilot squared off against an artificial intelligence pilot. Both flew simulated F-16s.
In all five rounds, the computer defeated the pilot.
Although the AlphaDogfight Trials showed that a computer can defeat a pilot, it does not mean that computers will suddenly be flying combat missions.
The test was limited and did not provide a real-world scenario.
But the Heron Systems AI was noted for its accurate shooting and its aggressiveness in the combat experiment.
The Pentagon test reinforces the notion that a computer does not make mistakes, putting combat pilots at a disadvantage.
I fully expect fighter pilots to argue that artificial intelligence, even if perfected, will still lack creativity in ACM, giving human pilots an advantage. I disagree: in a perfect matchup, the computer will not make a mistake, so creativity becomes moot.
My losing chess matches illustrate the point.