
AI Fighter Jets – Part 2: AlphaDogfight: The Turing Test of Aerial Combat

Author: 김 경진
Date: 2026-02-25 22:16
Views: 77
Part 2: AlphaDogfight: The Turing Test of Aerial Combat

1. DARPA's Challenge: Can AI Beat a Human Pilot?

Somewhere inside the Pentagon building, in the fall of 2019, a question was posed. Could a machine defeat a human fighter pilot in the sky? This was no idle curiosity. The question, raised by the Defense Advanced Research Projects Agency—commonly known as DARPA—was as fundamental a challenge as when Alan Turing asked "Can machines think?" seventy years earlier.

DARPA is a peculiar organization. On the surface, it looks like a bureaucratic government agency, but in reality, it is closer to a playground for mad scientists. This is the place that created the internet. This is the place that gave the world GPS. They also planted the seeds of the stealth fighter. It would not be wrong to call it the only government agency that bets on the impossible and says it is okay to fail.

Now they threw down the gauntlet in aerial combat—a domain long governed by human instinct and intuition. The program called Air Combat Evolution, or ACE, marked the beginning. The program manager was Colonel Dan Javorsek. His callsign was "Animal." A former F-16 pilot, he understood the psychology of his fellow pilots better than anyone. What kind of people are fighter pilots? They are people who take pride in gripping the control stick and fighting the enemy face to face. Telling them to entrust their lives to a machine is close to an insult.

Colonel Animal resolved to break through this resistance head-on. He drew a historical analogy to explain the significance of the challenge. In 1939, U.S. Army Chief of Staff General George Marshall asked cavalry chief General John Herr how he would respond to Germany's blitzkrieg. General Herr's answer went like this: load the horses onto trailers, truck them to the front to save their energy, then overwhelm the tanks on the battlefield. History, of course, did not unfold as he imagined. The cavalry vanished, and tanks dominated the battlefield.

Animal warned: if today's fighter pilots do not want to become the cavalry of the twenty-first century, they must embrace AI as the new tank. But how could he break the pilots' distrust? Showing once is better than explaining a hundred times. Animal decided to demonstrate AI defeating a top human pilot in one-on-one aerial combat. That was the genesis of the AlphaDogfight Trials.

Why dogfighting—close-range aerial combat—of all things? Modern air combat is shifting toward launching missiles from beyond visual range, known as BVR engagement. So why choose a World War II-style close-quarters format? There were deep reasons.

First, dogfighting is a closed-world problem. Like the game of Go, the rules are clear but the possibilities are nearly infinite. This kind of environment is ideal for AI to build skills through reinforcement learning. DARPA saw dogfighting as a gateway to more complex aerial combat missions.

Second, dogfighting is the ultimate testing ground for the OODA loop. Observe, Orient, Decide, Act—this decision-making cycle plays out in split seconds during a dogfight. Surpassing a human here would prove that AI's computational speed and judgment have exceeded human physiological limits.

Third, it served as a starting point for building trust. Pilots hone their fundamentals through dogfighting from their earliest days in training. If AI could dominate in this most basic yet most instinctive domain, pilots would have no choice but to acknowledge its capabilities.

The Johns Hopkins Applied Physics Laboratory, or APL, played a central role in the competition. They built an AI arena called the Colosseum—named after the ancient Roman amphitheater where gladiators once fought. Combining JSBSim, an open-source flight dynamics engine, with middleware, autonomous algorithms, and visualization software developed by APL, the system could run simulations faster than real time. AI agents died and were reborn tens of millions of times in this virtual sky, learning to fight.
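
To give a feel for what "faster than real time" means in practice, here is a minimal sketch that steps the open-source JSBSim engine as fast as the processor allows. It is illustrative only—the aircraft model name, property strings, and method names follow the public Python bindings as best I recall them and are not APL's Colosseum code.

```python
# Minimal sketch: stepping the JSBSim flight dynamics engine faster than real time.
# Illustrative only — APL's Colosseum wraps JSBSim in its own middleware; the model
# name, property strings, and set_dt call may differ by JSBSim version.
import time
import jsbsim  # pip install jsbsim

fdm = jsbsim.FGFDMExec(None)   # None -> use the aircraft data bundled with the package
fdm.load_model("f16")          # JSBSim ships a generic F-16 model
fdm.set_dt(1.0 / 120.0)        # 120 Hz physics step (assumed rate)

# Initial conditions: 15,000 ft, 400 kt calibrated airspeed
fdm.set_property_value("ic/h-sl-ft", 15000.0)
fdm.set_property_value("ic/vc-kts", 400.0)
fdm.run_ic()

start = time.time()
for _ in range(120 * 60):      # one minute of simulated flight
    fdm.set_property_value("fcs/elevator-cmd-norm", -0.05)  # placeholder "agent" input
    fdm.run()

wall = time.time() - start
print(f"Simulated 60 s of flight in {wall:.2f} s of wall-clock time "
      f"(~{60.0 / wall:.0f}x real time)")
```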

In August 2019, DARPA selected eight teams. Called the "Elite 8," they came from diverse backgrounds—defense giants like Lockheed Martin and Boeing subsidiary Aurora Flight Sciences, small AI specialists like Heron Systems and PhysicsAI, and university research teams like the Georgia Tech Research Institute. Their task was to develop algorithms that perfectly understood F-16 flight dynamics, executed basic combat maneuvers to get on the enemy's tail, and shot down opponents with guns. No missiles were allowed. Only aircraft performance and gunnery skill determined the outcome—the most primal form of combat.

The competition unfolded in three stages. The first trial in November 2019 was essentially an exhibition. At that point, the AIs could barely fly, crashing into the ground repeatedly. Researchers later recalled: "In the early stages, the AI didn't even know how to fly the airplane. It was a success if it just didn't crash." But within months, the situation changed dramatically.

By the second trial in January 2020, the AIs had begun mimicking basic tactics used by human pilots. The COVID-19 pandemic extended the program, adding a virtual Trial 2.5 in May. By the finals in August, the AIs were executing maneuvers beyond human imagination.

DARPA Deputy Director David Honey said in his opening remarks: "There were important questions that had to be answered when putting this program together. We need to understand whether AI autonomous algorithms can actually work in the very demanding environment of air-to-air combat."

The competition drew enormous attention. Approximately 10,000 people from 93 countries registered to watch, with an additional 5,000 requesting access. A distinctive feature of the AlphaDogfight Trials was its adoption of an esports format. The APL team created a "Control Zone" corner modeled after ESPN's SportsCenter. Experts in aerial combat and autonomy discussed the fundamentals of AI and dogfighting, training methods for AI and human pilots, and provided educational yet engaging analysis and commentary.

This was not a mere technology competition. It was the first official testing ground for the dominion of the sky—a dominion humanity had monopolized for ages. Could AI outpace humans in the OODA loop speed race? Could machine calculation triumph in a domain governed by human intuition? In August 2020, the world awaited the answer.

2. Heron Systems' Upset: An AI Armed with Reinforcement Learning

The overwhelming favorite was Lockheed Martin. The makers of the F-22 Raptor and F-35 Lightning II—the most formidable fighters in existence—knew more about fighter physics and air combat doctrine than anyone. Decades of accumulated aerodynamic expertise, thousands of engineers, astronomical research budgets. By any measure, they had the advantage.

Heron Systems, by contrast, was a small software company with roughly 30 employees. They had never built or flown a fighter jet. It seemed odd that this tiny Maryland-based company was even competing alongside defense giants. But they had a secret weapon: a fanatical devotion to deep reinforcement learning.

The principle of reinforcement learning is simple. Choose an action, observe the result, receive a reward or penalty, then choose a better action next time. Repeat this at insane speed. It is similar to how a toddler learns to walk—falling, getting up, falling again, getting up again. But AI differs from a toddler. It never tires, never gets frustrated, and can fall and rise millions of times a day.
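
Stripped to its essentials, that loop can be shown with a toy example: a tiny tabular Q-learning agent learning to close on a target along a one-dimensional line. This is purely illustrative—Falco used deep reinforcement learning in a vastly richer simulation—but the rhythm of act, observe, get rewarded, improve is the same.

```python
# Toy illustration of the trial-and-error loop described above: tabular Q-learning
# on a one-dimensional "tail chase" where the agent must reach the target's position.
# Entirely illustrative — Falco used deep reinforcement learning in a far richer world.
import random
from collections import defaultdict

ACTIONS = [-1, 0, +1]                  # move left, hold, move right (toy action space)

def step(agent_pos, target_pos, action):
    agent_pos = max(0, min(9, agent_pos + action))
    reward = 1.0 if agent_pos == target_pos else -0.1   # reward for being "on the tail"
    return agent_pos, reward

q = defaultdict(float)                 # Q[(state, action)] -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(5000):            # fall down, get up — thousands of times over
    agent, target = random.randint(0, 9), random.randint(0, 9)
    for _ in range(20):
        state = (agent, target)
        # choose an action: mostly the best known one, sometimes a random experiment
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        agent, reward = step(agent, target, action)       # observe the result
        best_next = max(q[((agent, target), a)] for a in ACTIONS)
        # learn: nudge the value estimate toward reward + discounted future value
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

# After training, the agent at position 3 with the target at 5 should choose +1.
print("Best action from (agent=3, target=5):", max(ACTIONS, key=lambda a: q[((3, 5), a)]))
```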

Heron Systems' approach was entirely different from that of the established defense firms. Teams like Lockheed Martin and Aurora tried to inject fighter pilot knowledge into their AI—teaching rules like "get on their tail" or "conserve energy." This is known as an expert systems approach. Heron took a different path. They taught the AI nothing. Instead, they threw it into a virtual environment and let it kill and be killed over and over.

Heron's AI agent was called Falco, named after the falcon. Falco grew through self-play—endlessly fighting itself in virtual space. Yesterday's self developed new tactics to beat today's self, and tomorrow's self broke through those tactics in turn, in an infinite loop of improvement.

According to Heron's machine learning engineer Ben Bell, Falco fought its way through a league of 102 different AI agents over approximately five weeks, accumulating more than 4 billion simulation steps. Converted to human flight hours, this amounted to roughly 30 years of flight experience. Considering that a real pilot rarely exceeds 2,000 to 3,000 hours in a lifetime, this was a victory of compressed time transcending physical constraints.
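
As a rough sanity check on that conversion—the per-step duration is an assumption of mine, since Heron has not published it—treating each simulation step as about a quarter second of simulated flight puts the numbers in the right ballpark:

```python
# Back-of-envelope check of the "roughly 30 years" figure.
# ASSUMPTION: each simulation step advances about 0.25 s of simulated flight time
# (Heron has not published the actual step size).
steps = 4_000_000_000
seconds_per_step = 0.25
sim_hours = steps * seconds_per_step / 3600        # about 278,000 hours
sim_years = sim_hours / (24 * 365)                 # about 32 years of non-stop flying
career_hours = 3000                                # a generous human career total
print(f"{sim_hours:,.0f} simulated hours = {sim_years:.0f} years of continuous flight, "
      f"or roughly {sim_hours / career_hours:.0f} full pilot careers")
```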

A fascinating phenomenon emerged during this process. In the early learning stages, the AI tried to mimic the orthodox maneuvers taught by human instructors—managing energy, maintaining turn rates, textbook movements. But as learning progressed, the AI began abandoning human doctrine. Instead, it pursued extreme efficiency within the physics allowed by the simulation engine.

Human pilots try to control their aircraft smoothly to keep the enemy in sight. Heron's AI was different. It made constant, minute, rapid corrections to the control surfaces—dozens of adjustments per second. Movements impossible for human hands. Through this, it achieved incredibly precise firing angles.

Behind Heron Systems' success were two sophisticated techniques: reward shaping and curriculum learning. Reward shaping provides the AI with more frequent feedback—not just rewarding a kill, but giving small rewards for gaining an advantageous position or closing in on the enemy's tail. Curriculum learning starts with easy opponents and gradually increases difficulty, like progressing from elementary to middle to high school.
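
In code, the two ideas might look something like the sketch below: a shaped reward that pays small amounts for good geometry instead of only for kills, and a curriculum that orders opponents from trivial to hard. Every weight, threshold, and opponent name here is an illustrative assumption, not a published Heron Systems or APL value.

```python
# Illustrative reward shaping and curriculum learning for a dogfight agent.
# All weights, angle/range bands, and opponent names are assumptions for illustration.
import math

def shaped_reward(killed_opponent, got_killed, aspect_deg, range_m):
    """Dense feedback instead of kill/no-kill only.

    aspect_deg: angle off the opponent's tail (0 = directly behind them).
    range_m:    distance to the opponent in metres.
    """
    reward = 0.0
    if killed_opponent:
        reward += 10.0                                   # the sparse "real" objective
    if got_killed:
        reward -= 10.0
    # Small, frequent rewards for working toward the opponent's tail...
    reward += 0.01 * math.cos(math.radians(aspect_deg))
    # ...and for closing into an (assumed) useful gun band.
    if 500.0 <= range_m <= 3000.0:
        reward += 0.005
    return reward

# Curriculum: progressively harder opponents, like moving up through school grades.
curriculum = [
    ("straight_flyer", 200_000),    # barely maneuvers, straight and level
    ("gentle_turner",  500_000),    # basic altitude and speed changes
    ("scripted_bfm",  1_000_000),   # canned tactical responses
    ("self_play",     2_000_000),   # frozen past copies of the agent itself
]

for opponent, training_steps in curriculum:
    print(f"train {training_steps:,} steps against {opponent}")
    # train(agent, opponent, training_steps, reward_fn=shaped_reward)  # placeholder hook
```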

The APL-developed enemy AI agents spanned a broad spectrum of complexity. The simplest, "Zombie," simulated a cruise missile in straight level flight. The basic agent "Logi" executed trivial altitude and speed changes. The scripted agent "BUD FSM" recognized engagement states and deployed predetermined responses. The most advanced reinforcement learning agent, "AlphaMav0," was developed entirely through self-play without human input, simulating an expert pilot.

On Day 3 of the competition, during the semifinals and finals, Heron Systems' AI unveiled a shocking tactic: the head-on gun shot. This meant charging straight at the opponent and firing the gun head-on. Human pilots consider this maneuver taboo due to collision risk—they typically do not fire beyond 135 degrees of aspect. The risk of midair collision is high, and debris from a destroyed aircraft could be ingested into one's engine. But for an AI without fear of death, this was the highest-probability path to victory.

In the semifinals against Aurora, Heron charged at terrifying speed toward the enemy's nose the moment combat began. The commentator, Glock, was stunned: "Heron shows zero hesitation. The instant the fight begins, it bares its fangs and goes straight for the opponent's nose."

In the finals against Lockheed Martin, Heron deployed this tactic to devastating effect. Lockheed's AI reflected human doctrine—managing energy, seeking positional advantage, moving with elegant orthodoxy. Heron's AI refused such gentlemanly combat. Like a rabid dog, it lunged at every opening. Its gunnery precision was extraordinary—the gun barrel seemed magnetically attached to the enemy. Once it locked on, it never let go. In fleeting instants of crossing, it delivered lethal strikes. At angles where human eyes would judge shooting impossible, the AI calculated a hit probability above 90 percent.

David defeated Goliath. Heron Systems crushed Lockheed Martin 16 wins to 4 losses in the finals, claiming the championship. A team of 30 had beaten a defense giant with tens of thousands of engineers. This demonstrated that data learning capability could matter more than domain knowledge. Methodology for training AI proved more decisive than experience building fighter jets.

Heron Systems' upset proved that AI can transcend mere human imitation and, through data and simulation, create new grammars of victory that humans never discovered. And now, one final opponent awaited: a human pilot.

3. Human vs. AI: 5-0

With the whole world watching via YouTube livestream, the main event began on the afternoon of August 20, 2020. The pilot representing humanity had the callsign "Banger." A graduate of the U.S. Air Force Weapons School with over 2,000 hours of F-16 flight time, he was a veteran among veterans. An active fighter pilot with the Washington D.C. Air National Guard, his full identity was not disclosed for operational security reasons, but he was one of the U.S. Air Force's top-tier pilots. He was not merely a skilled flyer—he was an instructor who had studied and taught countless tactics.

Banger put on a VR headset and sat in the simulator. The ADT VR system developed by the APL team was designed to provide the pilot with information matching what AI agents received. Beyond the traditional heads-up display, the VR headset presented special components to enhance situational awareness of threats and relative positions. It was a fair fight—at least in terms of information.

Over the preceding days, Banger had watched the AI matches and analyzed their patterns. Heron is strong in early head-on attacks. Avoid that zone, drag the fight long, and force an energy fight. That was the human strategy, born from thousands of hours of flight experience.

The match consisted of five rounds, starting from neutral conditions at various altitudes. Round 1 began with a neutral merge—both aircraft approaching head-on. A typical human pilot would engage in a turning fight to get on the other's tail. As expected, Banger tried to avoid Heron's head-on engagement, lowering altitude while turning. But Heron's reaction was far faster than anticipated. It snapped its nose around at inhuman speed, slicing into Banger's blind spot. After the merge, it reversed at a timing the human never expected, securing a firing solution. Banger was hit before he could even assess the situation. The first engagement ended in a flash—Heron's victory. The commentators were dumbfounded: "The AI's reaction speed completely destroyed the OODA loop." While the human was still observing and deciding, the AI had already decided and acted.

In Rounds 2 and 3, Banger changed tactics. He tried to exploit the fact that AI relies on perfect state information rather than visual cues, attempting aggressive maneuvers to escape AI's predictions—radical altitude changes, vertical maneuvers to confuse the AI. But Heron responded as if it knew Banger's moves in advance. Whatever Banger did, Falco reacted instantly—cutting inside Banger's turn radius in milliseconds or firing before Banger could establish a shooting position. Heron's gunfire struck Banger's aircraft without error.

Banger could not hide his bewilderment: "Standard fighter pilot training just doesn't work."

In Rounds 4 and 5, Banger attempted extreme low-altitude maneuvers, risking ground collision to lure the AI. He descended to 1,300 feet—roughly 400 meters—an altitude where humans feel the terror of ground impact. But for the AI, fear did not exist. In the fifth engagement, Banger used aggressive out-of-plane maneuvers to survive the initial merge and extend the fight. But Heron remained unshaken. It calmly maintained altitude, holding the advantageous position looking down from above, pursued relentlessly, and finally locked onto Banger's tail for the kill.

Final score: 5-0. A complete human defeat. Even more shocking was the substance of the fights—the human pilot never once landed an effective hit on the AI. All his time was spent in evasive maneuvers, and even those ultimately failed.

Banger emerged from the simulator, drenched in sweat, and gave a candid interview about his shocking experience. "The standard things you're trained to do as a fighter pilot didn't work," he explained. "We're doing complex physics in a dynamic environment. I'm trying to solve angles and various approach speeds, airspeed differentials, to get to a position where I can employ weapons on the enemy aircraft."

Regarding Heron's targeting ability, he was amazed: "Heron's ability to aim was superior to anything else." He explained that he had tried to evade the AI's fire through maneuvering, but the AI calculated even his micro-movements and maintained relentless aim. Its ability to adjust at the nanosecond level, combined with perfect state information between the two aircraft, enabled extraordinarily precise control.

Banger confessed that he was uncomfortable with the AI placing its aircraft in positions that could result in a collision. "The same goes for high-aspect gun shots. The AI can exploit that." The AI positioned its aircraft where humans would feel uncomfortable and fired from angles humans would never attempt.

Colonel Animal analyzed the results: "The AI program's ability to fly precisely provided the capability to ignore the flight safety rules that human Air Force pilots are trained on. That ultimately provided the advantage in simulated combat."

The message this 5-0 result delivered transcended technology. The premise that humans are supreme in aerial combat can be broken. That break comes not from an equipment gap but from a decision-making gap. Once broken, it is hard to reverse, because AI does not memorize tactics—it learns them.

The real shock the veteran felt was the limit of cognitive load. Humans expend enormous mental energy simultaneously performing flight control, enemy tracking, tactical judgment, and communications. AI processed all of this in parallel, without fatigue. What humans read by intuition, AI read by probability—and executed with ruthless consistency.

Banger was not shocked because the instincts he spent a lifetime developing became useless. He was shocked because those instincts were no longer a decisive advantage. Humans remain excellent. But the definition of excellence is changing.

Yet Banger did not despair. While acknowledging AI's capabilities, he left an important insight: "This is not the end. If we can make this technology work for us, we will be an invincible team on the battlefield." Glock, the commentator, agreed: "We trust what works." Rather than fearing AI's capabilities, pilots began thinking: how reassuring it would be to have such an AI as a wingman.

The match garnered over 440,000 views and was recognized as a game-changer by the aerospace community and the Pentagon. Dr. Timothy Grayson, director of DARPA's Strategic Technology Office, said: "These results show great promise for future air combat systems and concepts involving human-machine symbiosis."

The AlphaDogfight Trials rewrote the history of aerial combat. They proved that AI can be not merely an auxiliary tool but a lethal combatant surpassing humans. But they also revealed limitations—AI's strength existed only within the controlled environment of simulation and under conditions of perfect information.

The 5-0 scoreboard was not just a number. It was a herald of the coming era and a solemn warning to human pilots. Going forward, a pilot's value will shift from who flies better to who supervises and designs better. AI performs the maneuvers; humans design the missions, rules, and responsibilities. This is not a demotion—it is a shift in roles.

Now the task was to take this genius in a glass box out into the rough, unpredictable real sky. Could what was learned in simulation be replicated in reality? Could AI beat humans in a world without perfect information? The journey to find those answers was about to begin.

4. The ACE Program and X-62A VISTA: From Simulation to Real Skies

On a day in August 2020, aerospace experts around the world held their breath before their screens, watching Heron Systems' AI demolish a veteran human pilot 5-0. But the test pilots at Edwards Air Force Base shook their heads. "That's a video game."

Their skepticism was valid. The sky inside a simulator is always clean. Wind blows according to mathematical formulas, sensors never lie, and communication links never break. Most importantly, if you make a mistake, you can just hit the reset button. In that world, AI fights with perfect information—tracking the enemy's position, speed, and heading down to the millisecond.

The real battlefield is different. Sensors are flooded with noise like snowfall, communications are severed by enemy electronic warfare, and turbulence shakes the aircraft. G-forces that do not exist in the virtual world crush the pilot's neck, and crashing means not game over but death.

DARPA moved to bridge that gap. The ACE program—Air Combat Evolution—launched in 2019, and its aim went beyond merely having AI fly aircraft. The goal was to develop the capability for human pilots to trust AI and fight together as a team. DARPA chose air combat as its "challenge problem." The reasoning: if trust could be built in the extreme speed and uncertainty of aerial combat, other domains would be easier to solve.

The ACE program comprised three phases. Phase 1 spent 18 months verifying and developing core capabilities in simulation environments—the AlphaDogfight Trials were part of this. Phase 2 spent 16 months conducting unmanned aircraft flight tests, transitioning algorithms to small drones. Phase 3 involved testing on full-scale combat aircraft.

At the heart of this plan sat a unique fighter jet: the X-62A VISTA—Variable In-flight Simulator Test Aircraft. Outwardly, it looks like an ordinary two-seat F-16D. But its interior is a completely different world. Built through a collaboration between Lockheed Martin's Skunk Works and Calspan, this eccentric aircraft can mimic the flight characteristics of other aircraft simply by changing software settings. Want the feel of an F-35? Set it. Want to replicate an MQ-9 drone's movements? Change the settings. Like a chameleon changing colors to match its environment, VISTA could transform its identity in the sky.

DARPA installed a system called SACS—System for Autonomous Control of Simulation—on this one-of-a-kind aircraft held by the U.S. Air Force Test Pilot School at Edwards. This interface allowed AI agents to directly access the aircraft's flight control computer, controlling ailerons, rudder, elevators, and engine thrust.

Safety measures were essential, of course. You cannot hand the controls of a one-of-a-kind, multimillion-dollar fighter to a newborn AI. The AI might bug out and nosedive into the ground or attempt maneuvers beyond the aircraft's structural limits. A human safety pilot sat in the rear seat, ready to immediately intervene and reclaim control authority if the AI attempted dangerous maneuvers. The flight control computer was programmed with a safety envelope that software-blocked the AI from exceeding 9G or diving toward the ground.
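
Conceptually, such a software safety envelope is a filter sitting between the AI's commands and the flight control computer. The sketch below is a deliberately simplified illustration; the limits, field names, and logic are assumptions of mine, not the X-62A's actual safety system.

```python
# Simplified sketch of a command-level safety envelope between an AI agent and the
# flight controls. Limits and field names are illustrative assumptions; the X-62A's
# real safety system is far more elaborate and still includes a human safety pilot.
from dataclasses import dataclass

@dataclass
class Command:
    target_g: float        # commanded load factor
    pitch_deg: float       # commanded pitch attitude; negative = nose down

@dataclass
class State:
    altitude_ft: float
    descent_rate_fps: float

MAX_G = 9.0                # airframe limit assumed for this sketch
FLOOR_FT = 5000.0          # software "hard deck" below which dives are refused

def enforce_envelope(cmd: Command, state: State) -> Command:
    """Clip or override AI commands that would leave the safety envelope."""
    safe_g = max(-3.0, min(MAX_G, cmd.target_g))                  # clamp load factor
    safe_pitch = cmd.pitch_deg
    # If the aircraft is low and descending, refuse further nose-down commands.
    if state.altitude_ft < FLOOR_FT and state.descent_rate_fps > 0 and cmd.pitch_deg < 0:
        safe_pitch = 5.0                                          # command a gentle climb
    return Command(target_g=safe_g, pitch_deg=safe_pitch)

# Example: the agent asks for 11 g in a dive at 3,000 ft; the envelope intervenes.
print(enforce_envelope(Command(target_g=11.0, pitch_deg=-20.0),
                       State(altitude_ft=3000.0, descent_rate_fps=200.0)))
```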

In December 2022, historic flights began over Edwards Air Force Base, California. From December 1 to 16, the X-62A performed 12 AI-controlled flights, spending over 17 hours in the sky. The code that had learned through tens of millions of trial-and-error iterations in simulators was now controlling real jet engine thrust, managing lift, and flying through physical skies.

The process was not entirely smooth. AI that was flawless in simulation trembled slightly when encountering real-world turbulence, and sensor data latency caused delayed reactions. Some days the AI was too passive; other days it tried maneuvers beyond the aircraft's limits, triggering the safety cutoff system.

But the true value of VISTA testing lay in rapid iteration. Normally, verifying new fighter software takes months—modifying code, ground testing, obtaining flight clearance, then flying again. The ACE team updated code daily through VISTA. If an AI error was discovered during a morning flight, engineers would fix the code over lunch, run simulations, and apply the fix to the afternoon flight. Test Pilot School instructors at Edwards testified that "corrections that would have taken a year in the past were resolved in a single day." Engineers could swap the autonomy algorithms loaded on the X-62A in just minutes, and pilots could test different teams' AIs within hours of each other.

In September 2023, the day finally came. Over Edwards Air Force Base, an AI-piloted X-62A and a human-piloted F-16 engaged in actual aerial combat. Imagine this: at 600 meters altitude, two fighter jets hurtle toward each other at a combined closing speed of 1,900 kilometers per hour. As the gap narrows to just 600 meters, the human pilot feels instinctive tension while the AI calculates optimal positioning without a tremor.

It started with defensive maneuvers but quickly escalated into aggressive maneuvering to get on each other's tail. The climax was a high-aspect, nose-to-nose engagement with the two aircraft charging head-on at each other.

This was not a scripted exercise. The two fighters were genuinely trying to shoot each other down, maneuvering ferociously. Human pilots sat in the X-62A's front and rear seats, but they did not touch the controls. They served only as safety monitors. The AI judged situations and controlled the aircraft without human intervention.

Over 21 test flights, the team modified more than 100,000 lines of flight-related software. The AI flew more precisely, operated at the edges of safety regulations, and reacted faster thanks to the computer's quicker Observe-Orient-Decide-Act loop. While the human pilot maneuvered for follow-up shots and passed up head-on and opportunity shots despite having chances, the AI fired as quickly as possible. This ultimately made the difference.

The U.S. Air Force did not publicly disclose the specific win-loss results, citing national security. However, testimony from participants suggests the AI's performance was remarkably stable. Chief Test Pilot Bill Gray evaluated: "The X-62A proved that non-deterministic AI can be safely applied to aerospace systems by solving the very complex problem of dogfighting."

From a pilot's perspective, the difference in feel was unmistakable. Fighting AI in a simulator, there is a lingering sense that the opponent is trapped within game rules. But in the X-62A, the AI is no longer pixels and vectors. It is a multi-ton mass of metal screaming past you, and the wake turbulence and closure angles it generates can stall your own aircraft. Speed arrives not as a number but as a shock, and G-force is felt not on screen but in your vertebrae.

DARPA ACE Program Manager Lieutenant Colonel Ryan Hefron emphasized: "In all domains, research only moves as fast as the tools allow. VISTA's recent upgrades have made it a far more effective testbed by enabling rapid integration and safe testing of AI-based autonomy. This has allowed us to accelerate real-scale flight testing of AI-based autonomy by at least one year."

The message left by the ACE program and X-62A VISTA experiments is clear. If AlphaDogfight showed the possibility, ACE created a verifiable reality. The Turing test of aerial combat is not merely about winning or losing—it is about whether the fight can be conducted while passing real-world safety and trust standards. The X-62A actually crossed that threshold. The genius from the simulator had been reborn as a warrior of the real world.

5. Secretary Kendall's Flight: Entrusting Weapons Release to an AI Pilot

The moment you sit in the cockpit, all debates suddenly fall silent. It is easy to argue about autonomy and ethics in a conference room. Close the canopy and taxi onto the runway, and those words regain their weight—because now it is not concepts but gravity that answers.

On May 2, 2024, a gentleman in his seventies stood on the scorching runway of Edwards Air Force Base, California. It was Frank Kendall, Secretary of the Air Force. He was no mere administrator. A former Army officer who served during the Cold War, he had spent his entire career as a defense technology engineer and acquisition specialist. He came here not simply to inspect, but to stake his own life by climbing into an AI-piloted fighter to determine the fate of his AI unmanned combat aircraft program.

The aircraft he boarded was the X-62A VISTA, which had already undergone numerous tests. Kendall sat in the front seat; a safety pilot occupied the rear. For this flight, Shield AI had loaded a reinforcement learning-based AI onto the aircraft. Shield AI was the company that had acquired Heron Systems in 2021—the very Heron Systems whose AI defeated a human pilot 5-0 at AlphaDogfight in 2020. The autonomous agent on board was a direct descendant of that championship AI.

The canopy closed and the engine ignited with a roar. Shortly after takeoff, Kendall witnessed a historic moment: he transferred control authority to the AI. "I had a button on the stick and basically initiated the automation," he later explained. As they entered the training airspace above Edwards, a manned F-16 playing the role of an adversary approached.

Kendall's X-62A immediately entered combat mode. The fight began, but neither Kendall nor the rear-seat safety pilot touched the stick or throttle. The aircraft aggressively banked and maneuvered on its own to get on the enemy's tail. The AI subjected Kendall to up to 5G—his body weight quintupled. No trivial burden for a man in his seventies. Under that pressure, the AI coldly calculated the enemy's projected path and continuously twisted the aircraft to seize advantageous positions.

Lightning-fast maneuvers at over 885 kilometers per hour followed. The two aircraft raced within roughly 300 meters of each other, meeting nearly head-on, twisting and rolling to force the other into a vulnerable position. Proximity at those speeds is nothing like proximity between people on the ground. Two aircraft nose-to-nose within 300 meters, each waiting for the other's mistake—at this distance, a small error is immediate death.

The AI executed evasive maneuvers whenever the adversary's fire-control radar attempted to lock on, while seeking opportunities to counterattack. The human-piloted F-16 attacked aggressively, but the X-62A's AI gave up no openings. It counterattacked at timings the human could not predict.

After roughly an hour of intense flying, Kendall returned to base with a flushed expression. Climbing from the cockpit, he smiled. At the press conference that followed, he made a statement the world took notice of: "I saw enough during the flight that I'm confident I could trust the AI—which is still learning—to make the decision of whether to fire weapons in combat."

This statement immediately sparked controversy. Arms control experts and humanitarian organizations expressed deep concern about AI eventually dropping bombs autonomously without additional human consultation. The International Committee of the Red Cross warned: "There are widespread and serious concerns about leaving life-and-death decisions to sensors and software." But Kendall emphasized that there would always be human oversight in the system when weapons are employed. "Not having it is a security risk. At this point, we must have it."

From a pilot's perspective, weapons release is not a single button press. It is a chain of responsibility encompassing tactics, identification, rules of engagement, friendly positions, and weapon blast effects. In aerial combat, firing decisions often must be made within one second, but the judgment inputs packed into that second are messier than you might think. Sensors can be deceived, targets can employ countermeasures, datalinks can fail, and humans can misjudge. AI does not avoid misjudgment—it misjudges differently. Humans falter from fatigue and fear; AI can show strange confidence outside its training distribution. So entrusting weapons release to AI is not about whether AI is smarter, but about what safeguards can contain its modes of failure.

Kendall's demonstration flight was also a powerful message to Congress and defense budget officials. The Air Force had concluded it cannot counter China's numerical superiority with F-35s and next-generation fighters alone. The solution is a manned-unmanned teaming concept: one manned fighter leading a formation of three to five AI-controlled unmanned combat aircraft. The Air Force is planning for a fleet of more than 1,000 AI-enabled unmanned combat aircraft, with the first of them operational by 2028. An F-35 or next-generation fighter would command multiple AI wingmen, delegating dangerous missions to AI while the human commands from behind.

Kendall added: "This is a transformational moment. The potential for autonomous air combat that we've only imagined for decades is now reality. In the near future, air forces will divide into two kinds: those that adopt this technology, and those that lose to them because they didn't."

Shield AI's Senior Vice President of Product, Brett Darcey, explained: "We have demonstrated the path from the lab to a real airplane—not just bringing the technology there, but also demonstrating our team's methods to support the necessary regulatory compliance, testing, and verification." According to him, the autonomy on board was not built for civilian use or for flying from point A to point B. It was built for something far more sophisticated: dynamically reasoning about where the enemy is, what the enemy is doing, and how to position the aircraft to address both safety and weapons effectiveness.

VISTA's military operators say no other country in the world possesses an AI jet like this. The software learns from millions of data points in the simulator, then tests its conclusions during real flight, and that real performance data feeds back into the simulator for further AI learning—a virtuous cycle. China has AI too, but there are no signs it has found a way to run such tests outside a simulator. Like a junior officer learning tactics for the first time, some lessons can only be learned in the air, VISTA's test pilots say.

Chief Test Pilot Bill Gray's words summarize it: "Until you actually fly it, it's all speculation. And the longer it takes to figure that out, the longer it takes to field a useful system."

Since its first AI-controlled dogfight flight in September 2023, VISTA has performed approximately 24 similar flights. But the program is learning so rapidly from each engagement that some AI versions tested on VISTA are already beating human pilots in air-to-air combat.

The pilots at this base know they may in some ways be training their own successors or shaping a future structure requiring fewer pilots. But they also say they would not want to face an adversary with AI-controlled aircraft in the sky if the United States does not have its own fleet. "We have to keep running. And we have to run fast," Kendall said.

To be blunt, the real message of this event is not "AI fires weapons." It is that the incentives for the system to evolve toward AI firing weapons are overwhelmingly strong. Speed, cost, survivability—the battlefield is becoming too fast for humans to run every decision loop, and there are scalability limits to humans directly controlling every weapons release in manned-unmanned formations.

The realistic picture going forward looks roughly like this: in peacetime and gray zones, human control is strengthened and AI operates primarily in recommendation and alert mode. In high-intensity conflict, humans set policies and red lines in advance, and AI's engagement authority expands under limited conditions. In the ultimate stage, the shift moves from "a human always pushes the button" to "humans design the conditions under which the button does not need to be pushed."

What Kendall saw from the cockpit was the first page of that shift. People may not be ready to read that page, but the sky has already turned to the next chapter.

If Chuck Yeager opened the supersonic era by breaking the sound barrier in the X-1 in 1947, then Frank Kendall opened the AI aerial combat era in the X-62A in 2024. It was the day a machine flew into the sky on its own without human help, dueled with a human, and received a passing grade from a human leader. The dominion of the sky is no longer humanity's alone. The age of algorithmic aces has lifted off the runway.

#DARPA_AlphaDogfight_Trials #AlphaDogfightTrials2020 #DARPA_ACE_Program #HeronSystemsAIDogfight #Heron_Systems_Reinforcement_Learning #AIAerialCombatSimulation #HumanVsAIPilot #AlphaDogfight5to0 #BangerPilotAIBattle #ReinforcementLearningFighterAI #X62A_VISTA_AutonomousFlight #X62A_AI_Dogfight_2023 #VISTA_F16_AIPilot #EdwardsAFBAITest #FrankKendallAIFighterFlight #Frank_Kendall_X62A #KendallSecretaryAutonomousWeapons #DARPA_ACE_Phase3 #AI_BFM_AerialManeuvers #DeepReinforcementLearningCombat #EpiSci_AI_Combat #Shield_AI_AutonomousDogfight #Calspan_X62A_VISTA #AIPilotWeaponsAuthority #AIAutonomousPilotTrust #SimulationToRealSkies #AIFighterTuringTest #UnmannedFighterAutonomousEngagement #AlphaDogfight_Finals_Results #AI_vs_TopGun_Pilot #AerialCombatAIAlgorithm #ReinforcementLearningMillionsOfVirtualBattles #AIF16FlightTest #ACEProgramOperationalDeployment #AutonomousAirCombatSystem #AIWingmanCombatExperiment #DARPAAutonomousFighterProject #AIAirToAirCombat #KimKyungjinAIFighter #KimKyungjinSkyDominator #KimKyungjinAIFighterBook #KimKyungjinAIPowerWar #KimKyungjinAIDefensePolicy #KimKyungjinLAWSAutonomousWeapons #KimKyungjinAIMilitaryTechBook #KimKyungjinAIPolicy #KimKyungjinAIAdminRevolution #KimKyungjinDefenseIndustryAI #KimKyungjinDroneWarfareFuture