AI Board

A board for sharing AI news and analysis.


Author
Kyung-Jin Kim, AI Researcher
Date
2026-03-15 14:16
Views
71



AI Fighter Aircraft · AI Air Force

Who Will Rule the Skies?

By Kyung-Jin Kim, AI Researcher

Preface: Is the Age of Top Gun Over?

May 2, 2024. Edwards Air Force Base, California. The midday sun blazes over the Mojave Desert.

An F-16 in orange and white livery revs its engine at the end of the runway. The roar of the Pratt & Whitney F100 turbofan engine is the very sound that has dominated American skies for decades.

Inside the canopy, in the front seat, sits Frank Kendall, the U.S. Secretary of the Air Force, suited up in a flight suit and helmet.

This is no ordinary test flight.

The one gripping the flight stick is not a human pilot. It is an algorithm etched onto silicon chips, an artificial intelligence trained through millions of simulated dogfights.

Secretary Kendall does not touch the controls. Neither does the safety pilot in the back seat. The aircraft lifts off the runway and soars into the sky. Soon, another F-16, this one flown by a human, appears. The two fighters begin charging toward each other at 900 kilometers per hour.

In 1986, the movie Top Gun shook theaters around the world.

Do you remember the scene where Tom Cruise, in his aviator sunglasses, gripped the stick and dueled his rival Iceman in the sky? Pete Mitchell, callsign Maverick. He was a reckless pilot who broke the rules, but in the end he won. Human intuition and guts, and above all, the beating heart behind the stick, decided the outcome. A fighter pilot was another name for a hero.

Thirty-six years later, in 2022, the sequel Top Gun: Maverick hit theaters.

In the opening scenes, a now gray-haired Maverick flies the hypersonic test aircraft Darkstar. But his project faces cancellation. Why? Because the budget is being redirected to an unmanned aircraft program. In the film, an admiral tells Maverick coldly: "Your kind is headed for extinction." Maverick pushes back. The movie unfolds as a story of human pilots completing the mission through sheer will. Audiences cheered. But reality had already moved in a different direction.

Two years before the film's release, in August 2020, a quiet competition was held at the Johns Hopkins University Applied Physics Laboratory near Washington, D.C. It was the AlphaDogfight Trials, hosted by DARPA, the Defense Advanced Research Projects Agency. AI systems developed by eight teams flew virtual F-16s against each other in a simulator. The AI that made it to the finals was an algorithm built by a small startup called Heron Systems. It defeated every other AI in the preliminaries.

Then the final opponent appeared. A human. An active-duty U.S. Air Force F-16 pilot, callsign "Banger." A veteran with over 2,000 hours of flight time. In the virtual cockpit, Banger took the stick.

AI versus human. The result was stunning. 5 to 0. A clean sweep for the AI. In five engagements, Banger was unable to land a single effective hit on the AI. Instead, he was shot down every time. The AI made decisions at speeds no human could match and maneuvered at angles no human could predict. After the event, Banger said: "Honestly, it was closer to awe than surprise."

It all happened inside a simulator. Virtual reality. Not the real sky. So some said: actual flight would be different. In simulation, there is no air resistance, no engine vibration, no human body crushed by G-forces. Reality, they said, is always more complicated.

Three years later, in September 2023, back at Edwards Air Force Base. DARPA's Air Combat Evolution (ACE) program team placed a modified F-16 called the X-62A VISTA on the runway. This fighter carried AI algorithms trained through tens of millions of simulated dogfights. And for the first time in history, an AI-piloted fighter engaged a human-piloted F-16 in actual aerial combat. At a combined closing speed of 1,900 kilometers per hour, the two aircraft came within 600 meters of each other. Nose-to-nose combat. A real dogfight.

Who won? The military did not officially say. National security reasons, they claimed. But that was beside the point. What mattered was the fact itself: AI could fly a real fighter jet and conduct aerial combat. The wall of simulation had crumbled. The last boundary between virtual and real had been torn down.

Then in May 2024, Secretary Kendall personally boarded that very aircraft. He flew for one hour. He experienced violent maneuvers that subjected his body to 5Gs of gravitational force at 885 kilometers per hour. He witnessed the AI-controlled jet twisting and turning in combat within 300 meters of a human-piloted F-16. He never touched the controls. When the canopy opened and Kendall climbed out, a smile spread across his face.

He told reporters: "Not having this technology is a risk to national security. We must have it now."

And he added one more thing. When asked whether he would trust AI with weapons release authority, the Secretary nodded. "I saw enough today. It can be trusted."

This is the reality we live in.

For thousands of years, humanity has sought to conquer the sky. In Greek mythology, Icarus flew toward the sun on wings held together with wax, only to plummet when he drew too close.

Thousands of years later, in 1903, the Wright brothers flew 36 meters in 12 seconds at Kitty Hawk. Just 66 years later, humans set foot on the Moon. On the battlefield, aircraft evolved from reconnaissance platforms to bombers, then to fighters. World War I biplane pilots shot at each other with pistols. In World War II, Mustangs and Spitfires sliced through European skies to clash with Messerschmitts. In the Korean War's MiG Alley, the jet age began. Over Vietnam, F-4 Phantoms chased enemy aircraft and fired missiles. In the Gulf War, the F-117 stealth fighter pierced the night sky over Baghdad.

In every one of those moments, there was one constant. Inside the cockpit, there was always a human being. A heart beating, sweat running, fear gripping, yet hands never leaving the stick. The fighter was an extension of the pilot. A fighter without a pilot was unthinkable.

But now, that equation is about to change.

Artificial intelligence does not tire. It knows no fear. It does not black out at 9Gs. It executes millions of calculations per second. In the time it takes a human pilot to blink, AI analyzes the situation, makes a decision, and executes the maneuver. The U.S. Air Force is already planning a fleet of over 1,000 AI unmanned combat aircraft. By 2028, the first Collaborative Combat Aircraft (CCA) will enter operations. Alongside human-piloted F-35 stealth fighters, AI-controlled drones will fly as wingmen. Loyal Wingmen. That is their name.

America is not alone. China is developing stealth drones like the GJ-11 Sharp Sword and the Dark Sword. Europe is pursuing two sixth-generation fighter programs: GCAP (the successor to Tempest) and FCAS. South Korea is building a Next-generation Air Combat System (NACS) around the KF-21 Boramae. Japan, Israel, and Turkey follow close behind. A new race for dominion over the skies has begun.

This book tells that story.

From the earliest days of unmanned flight, through the evolution of Predator and Reaper into battlefield assassins, the shock of AI defeating a human 5-0 in AlphaDogfight, to the first real-sky aerial combat between AI and a human pilot. It explains how reinforcement learning teaches a fighter to fly, how simulated experience transfers to reality, and how sensor data fuses to find the enemy. It covers wingman drones and swarm tactics, sixth-generation fighters and intelligent formations. And the unavoidable questions: Should machines decide who lives and who dies? Who should pull the trigger? When AI commits a friendly fire incident, who bears the responsibility?

Is the age of Top Gun truly over? The answer is not simple. Perhaps Maverick was right. Human judgment and intuition may still be irreplaceable on the battlefield. But the moment AI flew an F-16 against a human pilot over Edwards Air Force Base, something changed irreversibly. The question is no longer whether AI will replace humans. It is how we will work together.

Who will rule the skies? Join us on the journey to find the answer.

January 17, 2026

AI Researcher Kyung-Jin Kim

Part I: The Dawn and History of AI Fighter Aircraft

1. The Birth of Unmanned Aircraft: From Target Drones to Reconnaissance Drones

1935. On the deck of a British naval vessel. Gunners look skyward. A small radio-controlled biplane weaves through the clouds.

The DH.82B Queen Bee—this aircraft was born to be shot down. It was a target for gunnery practice.

No one sat in the cockpit. Only the buzzing hum of its engine echoed through the sky, and some likened the sound to the beating wings of a drone bee. That is how the word "drone" was born.

The history of unmanned aircraft began not in glory but in sacrifice. Being shot down was the mission. Anti-aircraft gunners needed something to shoot at to learn how to aim at moving targets, and they could not very well shoot at manned aircraft. So a machine that could fly without a pilot was built. A machine that did not need to come back. A machine whose loss would bring no tears. This was the first identity of the unmanned aerial vehicle.

When World War II erupted, demand for target drones exploded. In America, a man named Reginald Denny began fitting model aircraft with radio-control equipment and selling them to the military. The Radioplane OQ-2 became America's first mass-produced drone, and its improved successor, the OQ-3, saw over 9,400 built during the war, flying the skies over the Pacific and Europe. Most, of course, were shot down during training and fell into the sea or crashed into fields.

When the war ended and the shadow of the Cold War fell over the world, the fate of unmanned aircraft changed completely.

Drones were no longer just for training. Knowing what the enemy was doing became the key to survival.

At the time, reconnaissance aircraft flying over enemy territory carried pilots, and if they were shot down, the loss of the pilot brought enormous political crises.

On May 1, 1960, an American U-2 spy plane was shot down over the Soviet Union. Pilot Gary Powers was captured alive, and U.S.-Soviet relations turned icy cold.

This incident posed a single question to military strategists around the world: Is there a way to reconnoiter enemy territory without a pilot?

The answer already existed. You just had to flip the target drone around. Take the machine that was built to be shot down and make it come back alive. The U.S. Air Force converted the Ryan Firebee, a jet-powered target drone, into a reconnaissance platform. Codenamed Lightning Bug, these aircraft flew thousands of secret missions during the Vietnam War. They flew over North Vietnam's dense air defenses, took photographs, collected electronic signals, and sometimes served as decoys to exhaust enemy missiles.

Lightning Bugs flew along pre-programmed routes. Real-time control was impossible. Film could only be developed after the aircraft returned, and intelligence was always in the past tense. But the simple fact that no pilot was aboard was revolutionary enough. A shootdown no longer triggered a diplomatic crisis. No prisoner exchange negotiations. The age of machines dying in our place had begun.

The true leap came in 1982 in Lebanon's Bekaa Valley. The Israeli Air Force launched small drones called Scout and Mastiff to neutralize Syrian air defenses. These tiny propeller aircraft were hard to detect on radar, and crucially, they could transmit video in real time. The commander could watch the battlefield live from inside a container. The moment an enemy radar activated, waiting fighter jets immediately launched anti-radiation missiles. Nineteen Syrian surface-to-air missile batteries were destroyed in a single day.

This battle became the decisive turning point in the history of unmanned aircraft. The machine that began as a target had become the eyes of the battlefield. It no longer flew to be hit. It flew to see, to remember, and to report. Shocked by Israel's success, America quickly reverse-imported the technology, planting the seeds that would later grow into Predator and Reaper.

This is how the drone was born. From bullet sponge to battlefield sentinel. From expendable to strategic asset. The simple idea that a human need not be aboard changed the face of war.

2. Predator and Reaper: Drones Evolved into Sky Assassins

Three a.m. Creech Air Force Base, Nevada. Inside an air-conditioned shipping container, a pilot grips a joystick.

Before him, multiple monitors display a village in Afghanistan in infrared imagery, half a world away. People appear as white dots. One dot moves. It is the target. The pilot's finger rises to the button. "Fire." Three seconds later, the dot on the screen disappears. Six hours later, the pilot drives home and has dinner with his children.

This is the new landscape of war created by the MQ-1 Predator. A war in which the distance between seeing and killing has vanished. A war in which the pilot need not be on the battlefield. The Predator was not merely an unmanned aircraft. It was a machine that rewrote the grammar of war itself.

The Predator traces its roots to the garage of Israeli-born engineer Abraham Karem. In the 1980s, he emigrated to America and used his own funds to design a long-endurance drone called Amber. His philosophy was clear: a drone must stay aloft for a long time, and it must be reliable. Amber's design evolved through the GNAT-750 into the RQ-1 Predator in the mid-1990s.

Initially, the Predator was a pure reconnaissance aircraft. But after September 11, 2001, the CIA and Air Force armed it with Hellfire anti-tank missiles. In November 2002, in Yemen, a Predator tracked the vehicle of a senior al-Qaeda leader. A button was pushed at CIA headquarters in Langley, Virginia. The signal traveled via satellite. The Hellfire launched. The vehicle erupted in flames. It was the first targeted drone strike conducted outside a declared war zone.

The Predator's designation changed from RQ-1 to MQ-1. The "R" stood for Reconnaissance; the "M" stood for Multi-role. That single letter marked the dividing line in the history of warfare. The drone was no longer a passive observer. It had become a predator that hunted on its own.

The Predator's success spawned a more powerful successor: the MQ-9 Reaper. The name itself means "the Grim Reaper." With a turboprop engine, the Reaper was twice as fast and carried fifteen times the payload. It could carry eight Hellfire missiles, GBU-12 laser-guided bombs, and GBU-38 JDAMs. Firepower rivaling a manned fighter.

The Reaper's true terror lay in its endurance. Fully armed, it could stay aloft for over 14 hours. With a full fuel load, up to 27 hours. This meant the enemy got no rest. The Reaper loitered over target areas all day, observing patterns of life. Who goes where. Who meets whom. When they sleep. Everything was recorded. And at some moment, when the opportunity arose, a missile flew.

Yet even the sky's assassin has weaknesses. The Reaper is slow. Its maximum speed is 480 km/h, but typical operating speed is just 370 km/h—a quarter of a jet fighter's speed. Its 20-meter wingspan makes it easy to detect on radar. Over 20 MQ-9 Reapers have been shot down by Yemen's Houthi rebels through 2024. Aircraft costing over $30 million each, brought down by missiles costing a fraction of that. This is the cost-asymmetry trap. In a full-scale war between major powers, the Reaper would struggle to survive.

So the U.S. Air Force is preparing the next generation. Stealthy, AI-autonomous drones that can complete missions even when communications are severed. Predator and Reaper were the bridge—the stepping stones from the age of remote control to the age of autonomous flight. They may have been the last drones flown by human hands.

3. The Pilot's Limits: The Physical Wall of G-Force and Cognitive Load

Altitude 5,000 meters. Inside an F-16 cockpit, the pilot begins a hard turn. Instantly, body weight increases ninefold. Eighty kilograms becomes 720 kilograms of pressure.

Blood drains from the head. The edges of vision darken. It feels like driving through a tunnel. The pilot clenches his teeth and tenses his abdomen. Hold your breath. Tense every muscle. Three seconds. Five seconds. Seven seconds. You must endure. Fail, and you lose consciousness. The aircraft plunges into the ground.

This is 9G. Nine times the force of Earth's gravity. The limit a modern fighter can withstand, and the limit a human can endure. And here lies the irony. What constrains the performance of a 21st-century cutting-edge fighter is not the engine, not the radar, not the missiles. It is the human sitting in the cockpit.

A fighter airframe can be designed to withstand 20G or more. But as long as a person is inside, 9G is the ceiling. The human heart fights a desperate battle to pump blood to the brain at 9Gs. And it often fails. When blood flow drops, a "grayout" sets in—vision turns gray. Next comes "blackout"—total darkness. If it continues, consciousness is lost. G-LOC: G-force induced Loss of Consciousness. When an unconscious pilot releases the stick, the aircraft cannot fly itself.

But G-force is only half the problem. The other half is inside the brain. Cognitive load. Modern air combat unfolds in a storm of information. AESA radar tracks dozens of targets simultaneously. Infrared sensors detect heat sources. Data links exchange information with friendly aircraft. The Radar Warning Receiver alerts to enemy radar locks. All this information floods the pilot's helmet display and instrument panels.

The human brain is not a parallel processor. It can focus on only one thing at a time. Multi-tasking is really just rapid switching. And switching takes time. 0.1 seconds. 0.2 seconds. In aerial combat, that time decides life and death.

John Boyd's OODA Loop explains the essence of air combat. Observe, Orient, Decide, Act. Whoever cycles through this loop faster wins. Beat your opponent by half a second and you live. But humans have a biological floor for reaction time that no amount of training can fully eliminate.
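Boyd's loop can be made concrete with a toy timing model. The step durations below are assumptions chosen purely for illustration, not measured human or machine figures; the point is the ratio between the two cycle times, not the absolute numbers.

```python
# Notional Observe-Orient-Decide-Act (OODA) cycle comparison.
# All step times are invented assumptions for illustration.

HUMAN_STEPS = {"observe": 0.20, "orient": 0.30, "decide": 0.25, "act": 0.15}   # seconds
AI_STEPS    = {"observe": 0.002, "orient": 0.003, "decide": 0.002, "act": 0.003}

def cycle_time(steps):
    """Total time for one full pass through the loop."""
    return sum(steps.values())

human = cycle_time(HUMAN_STEPS)   # roughly 0.9 s per decision cycle
ai = cycle_time(AI_STEPS)         # roughly 0.01 s per decision cycle
print(f"AI runs roughly {human / ai:.0f} decision cycles per human cycle")
```

Under these assumed timings, the machine completes dozens of full decision cycles in the time a human completes one, which is what "getting inside the opponent's OODA loop" means in practice.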

AI knows none of these limits. It does not feel G-forces. It does not tire. It has no fear. It processes thousands of sensor inputs in milliseconds. By the time a human thinks "Enemy!," the AI has already begun evasive maneuvers and is preparing to fire. It handles 15Gs without flinching. No ejection seat needed. No oxygen mask. No life-support system.

In the end, human limitations prove the necessity of AI fighters. The physical wall of G-force, the mental wall of cognitive load. These two walls are the product of thousands of years of human evolution, but they no longer match the speed of modern air combat. It is only a matter of time before pilots leave the cockpit. But they will not disappear entirely. Their role will change. From hands-on aviator to AI fleet commander. From training fingers to training minds.

4. Network-Centric Warfare: The Battlefield Connected by Data Links

Altitude 30,000 feet. The sun is sinking below the horizon. The sky outside the canopy deepens to navy blue. Then the Radar Warning Receiver emits a sharp beep. Someone is out there. Heart rate rises. Palms start to sweat. Where is the enemy? My radar screen shows nothing.

At that moment, a new symbol appears on my display. A target roughly 120 kilometers away. One I had not seen.

This information did not come from my radar. It came from an E-3 Sentry AWACS flying 200 kilometers behind me. The AWACS's powerful radar had detected the enemy, and the target's position, speed, and heading were transmitted to my fighter via data link. Now I could see what I could not see before. My eyes were no longer alone.

This is the essence of network-centric warfare.

In the history of war, information has always been as important as weapons—perhaps more so. Think of medieval scouts. They risked their lives infiltrating deep behind enemy lines to learn the enemy's movements. The problem was always the speed at which that information traveled. By the time a scout rode back to report, the battlefield had already changed.

In 1991, the Gulf War demonstrated a new paradigm. The U.S. and coalition forces achieved an overwhelming victory over Iraq. The secret was not just better fighters or more powerful missiles. The real difference lay in how information was shared.

The U.S. military connected every friendly platform on the battlefield through a tactical data link called Link-16. Fighters, bombers, ships, and ground forces were bound into a single vast network.

Think of it this way: if the old radio turned a human voice into radio waves, the data link is computers talking directly to computers. A pilot no longer needs to say "Hostile at three o'clock, 80 klicks, angels 20." Radar-detected information is digitized and transmitted simultaneously to every connected friendly asset. Latitude, longitude, altitude, speed, heading, and friend-or-foe identification—all transmitted in the blink of an eye.
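The difference between voice radio and a data link can be sketched as a radar track reduced to a machine-readable record and broadcast to every connected platform. The field names and JSON encoding below are invented for clarity; real Link-16 J-series messages use a compact binary format, not JSON.

```python
from dataclasses import dataclass, asdict
import json

# Simplified sketch of what a tactical data link carries: one radar
# track serialized into a compact digital record. NOT the real Link-16
# message format; fields and encoding are illustrative only.

@dataclass
class TrackMessage:
    track_id: int
    lat_deg: float
    lon_deg: float
    alt_m: float
    speed_mps: float
    heading_deg: float
    iff: str            # "friend", "hostile", or "unknown"

def broadcast(track: TrackMessage) -> bytes:
    """Serialize the track for transmission to all networked platforms."""
    return json.dumps(asdict(track)).encode()

msg = broadcast(TrackMessage(4021, 36.58, -117.68, 6100.0, 250.0, 270.0, "hostile"))
print(len(msg), "bytes on the wire")
```

A voice call conveys one sentence to whoever is listening; a record like this reaches every receiver on the network simultaneously and drops straight into each platform's tactical display.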

In December 2024, a remarkable milestone was reached. Norwegian Air Force F-35s and P-8 maritime patrol aircraft successfully conducted Link-16 communication via space-based satellites. Until then, Link-16 operated only within line-of-sight range. Because the Earth is round, you could not communicate directly with allies beyond the horizon. But by using the U.S. Space Development Agency's low-Earth orbit satellite constellation as a relay, that limitation was shattered. Now, tactical information can be shared in real time with allies on the other side of the globe.

Part II: AlphaDogfight — The Turing Test of Aerial Combat

1. DARPA's Challenge: Why Dogfighting?

Spring 2019. A room in the Pentagon. Colonel Dan Javorsek, callsign "Animal," sat in DARPA's Strategic Technology Office. A former F-16 pilot with decades of flying experience, Animal was pursuing a single question: Can AI truly conduct aerial combat? And more importantly, can pilots be made to trust it?

Animal used a historical analogy. In the 1930s, a Polish cavalry commander, faced with the arrival of tanks, proposed loading horses onto trailers to conserve their energy, then unleashing them on the battlefield to outmaneuver the tanks.

History, of course, did not go his way. The cavalry vanished, and tanks dominated the battlefield. Animal warned: to avoid becoming the cavalry of the 21st century, today's fighter pilots must embrace AI—the new tank.

But how do you break through the distrust of pilots? Showing beats telling. Animal decided to demonstrate AI defeating the best human in a one-on-one dogfight. That was the genesis of the AlphaDogfight Trials.

Why dogfighting specifically? Modern air combat has shifted toward beyond-visual-range (BVR) engagements with missiles fired from distances where the enemy is invisible. So why choose a World War II-style close-range fight? There were deep reasons.

First, dogfighting is a closed-world problem. Like Go, the rules are clear but the number of possibilities is nearly infinite. This environment is ideal for AI to build skill through reinforcement learning. DARPA saw dogfighting as the gateway to more complex air combat missions.

Second, dogfighting is the ultimate test of the OODA Loop. Observe, Orient, Decide, Act—all happening in split seconds. Surpassing humans here proves that AI's computational speed and judgment have exceeded human biological limits.

Third, it was a starting point for building trust. Pilots hone their fundamentals through dogfighting from their earliest training days. If AI dominates in this most primal arena, pilots would have no choice but to accept it.

The Johns Hopkins Applied Physics Laboratory (APL) played the central role. They built an AI arena called the "Colosseum"—named after the ancient Roman amphitheater where gladiators fought. Combining the open-source flight dynamics software JSBSim with APL-developed middleware, autonomous algorithms, and visualization tools, the system could run simulations faster than real time. AI agents died and were reborn tens of millions of times in this virtual sky, learning to fight.

DARPA selected eight teams in August 2019. The "Elite Eight" included defense giants like Lockheed Martin and Boeing, small AI firms like Heron Systems and PhysicsAI, and university labs like Georgia Tech. Their task: develop algorithms that perfectly understood F-16 flight dynamics, performed basic combat maneuvers to get on the enemy's tail, and shot them down with guns only. No missiles. Pure airmanship and gunnery—the most primal form of combat.

The event adopted an esports tournament format. APL created a "Control Zone" modeled after ESPN's SportsCenter, with aerial combat experts providing educational yet engaging commentary. This was no mere tech demo. It was the first official proving ground for the question of whether AI could outperform human intuition in the sky.

2. Heron Systems' Upset: AI Armed with Reinforcement Learning

The heavy favorite was Lockheed Martin. Builders of the F-22 Raptor and F-35 Lightning II, they knew fighter physics and air combat doctrine better than anyone. Decades of aerodynamic know-how, thousands of engineers, astronomical research budgets. They appeared to have every advantage.

Heron Systems, by contrast, was a software company with roughly 30 employees. They had never built or flown a fighter jet. Based in Maryland, this small firm seemed out of place alongside defense industry titans. But they had a secret weapon: a fanatical devotion to deep reinforcement learning.

The principle of reinforcement learning is simple. Choose an action, observe the result, receive a reward or penalty, then choose a better action next time. Repeat at insane speed. Like a child learning to walk—falling, getting up, falling again, getting up again. Except AI never tires, never gets discouraged, and can fall and rise millions of times in a single day.
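That loop of act, observe, receive reward, and adjust is exactly what tabular Q-learning implements. The one-dimensional "pursuit" environment below is an invented stand-in, not anything from Heron's actual system; the agent simply learns, from reward alone, that the action which closes the distance to a target pays off in the long run.

```python
import random

# Minimal tabular Q-learning on a toy 1-D pursuit problem.
# Environment and rewards are invented for illustration only.

random.seed(0)
ACTIONS = [-1, 0, 1]            # turn away, hold course, turn toward the target
q = {}                          # Q-table: (distance, action) -> estimated value

def step(distance, action):
    """Toy dynamics: action +1 closes distance; reaching 0 is 'guns range'."""
    new_d = max(0, min(9, distance - action))
    reward = 1.0 if new_d == 0 else -0.01    # time penalty, big terminal payoff
    return new_d, reward

for episode in range(2000):
    d = random.randint(1, 9)
    for _ in range(20):
        if random.random() < 0.1:                              # explore
            a = random.choice(ACTIONS)
        else:                                                  # exploit
            a = max(ACTIONS, key=lambda x: q.get((d, x), 0.0))
        new_d, r = step(d, a)
        best_next = max(q.get((new_d, x), 0.0) for x in ACTIONS)
        old = q.get((d, a), 0.0)
        q[(d, a)] = old + 0.1 * (r + 0.9 * best_next - old)    # Q-learning update
        d = new_d
        if d == 0:
            break

best = max(ACTIONS, key=lambda x: q.get((5, x), 0.0))
print("learned action at distance 5:", best)
```

Nobody told the agent to chase the target; the preference for closing emerges purely from thousands of episodes of reward feedback, which is the same mechanism, scaled up enormously, behind a dogfighting agent.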

Heron's approach was radically different from established defense firms. While Lockheed and Aurora tried to inject pilot knowledge into their AI—rules like "get on his tail" and "manage your energy"—Heron taught its AI nothing. Instead, they dropped it into a virtual environment and let it die and kill endlessly.

Heron's AI agent was called Falco, named after the falcon. Falco grew through self-play, endlessly fighting copies of itself. Yesterday's self was the enemy today's self had to beat, and tomorrow's self would break today's winning strategy. This cycle repeated infinitely.
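The self-play scheme can be sketched as a league of frozen past versions of the agent, with a new snapshot added whenever the current agent surpasses the newest member. The `Agent` class and duel logic below are invented stand-ins for illustration, not Heron's Falco code.

```python
import copy
import random

# Sketch of a self-play league: the current agent trains against
# frozen copies of its past selves. Skill numbers and the duel model
# are invented placeholders, not a real training algorithm.

random.seed(1)

class Agent:
    def __init__(self, skill=0.0):
        self.skill = skill

    def train_against(self, opponent):
        # Stand-in for a batch of reinforcement learning: improvement
        # is larger against closely matched opponents.
        gap = abs(self.skill - opponent.skill)
        self.skill += max(0.01, 0.1 - gap * 0.05)

def duel(a, b):
    """Probabilistic engagement: the higher-skill agent wins more often."""
    return a if random.random() < 0.5 + (a.skill - b.skill) else b

current = Agent()
league = [copy.deepcopy(current)]        # generation 0

for generation in range(100):
    opponent = random.choice(league)     # yesterday's self is today's enemy
    current.train_against(opponent)
    if duel(current, league[-1]) is current:
        league.append(copy.deepcopy(current))   # freeze a new snapshot

print(f"league size: {len(league)}, final skill: {current.skill:.2f}")
```

The key design property is that the curriculum builds itself: every time the agent improves, its own frozen copy becomes the next hurdle, so the difficulty rises exactly as fast as the student does.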

According to Heron's machine learning engineer Ben Bell, Falco spent roughly five weeks fighting a league of 102 different AI agents, engaging in billions of dogfights. Over four billion simulation steps. Translated into human flight hours, this equates to roughly 30 years of flying experience. Considering a real pilot rarely exceeds 2,000 to 3,000 hours in a lifetime, this was a victory of compressed time over physical time.

As learning progressed, something fascinating emerged. Early on, the AI mimicked textbook human maneuvers. But as it trained further, it abandoned human doctrine entirely. Instead, it pursued extreme efficiency within the physics engine's laws. It made micro-corrections to control surfaces dozens of times per second—movements impossible for human hands—achieving astonishing precision in its firing solutions.

In the end, David slew Goliath. Heron Systems defeated Lockheed Martin 16-4 in the finals and claimed the championship. A team of 30 beat a defense behemoth with tens of thousands of engineers. This proved that data-learning ability can matter more than domain expertise. The methodology of training AI was more decisive than decades of fighter-building experience.

3. Human vs. AI: The 5-0 Scoreline

With the world watching on YouTube livestream, the main event began on the afternoon of August 20, 2020.

The human representative's callsign was Banger. A graduate of the U.S. Air Force Weapons School, with over 2,000 hours in the F-16—a veteran among veterans. Not just a skilled aviator, but an instructor who had studied and taught countless tactics.

Banger donned a VR headset and took his seat in the simulator. APL's ADT VR system was designed to give the pilot information matching what the AI agent received. A fair fight—at least in terms of information.

Over the preceding days, Banger had watched the AI matches and analyzed their patterns. Heron is strong in early head-on attacks. Avoid that zone and drag the fight into a prolonged energy battle. That was the human strategy. A judgment drawn from thousands of hours of experience.

Round 1 began. The two aircraft started from a neutral merge, facing each other head-on. Banger tried to avoid Heron's frontal assault, lowering altitude and turning away.

But Heron's reaction was far faster than expected. At speeds the human eye could barely follow, it snapped its nose around and sliced into Banger's blind spot. Then, with timing no human would anticipate, it swung its nose onto Banger and established a firing solution. Banger was hit before he could even assess the situation. The first engagement was over in a flash: victory for Heron.

The commentators were stunned. "The AI's reaction speed completely destroyed the OODA Loop." While the human was still observing and orienting, the AI had already decided and acted.

In Rounds 2 and 3, Banger changed tactics. He tried to exploit the AI's reliance on perfect state information by making sharp, unpredictable maneuvers. But Heron responded as if it already knew what Banger would do. It reacted in milliseconds, cutting inside his turn radius or firing before he could even set up his shot.

In Rounds 4 and 5, Banger attempted extreme low-altitude maneuvers. He descended to 1,300 feet—about 400 meters—an altitude where fear of ground collision grips any human. But AI has no fear.

Final score: 5-0. A complete human defeat.

What was more shocking was the content. The human pilot never once landed an effective hit. He spent all his time in evasive maneuvers, and even that was not enough.

Banger emerged from the simulator drenched in sweat. He was candid about the experience: "The standard things you're trained to do as a fighter pilot didn't work." He marveled at Heron's aiming: "Heron's targeting was superior to anything else."

But Banger did not despair. He acknowledged AI's capabilities while offering a crucial insight: "This is not the end. If we can make this technology our ally, we'll be an invincible team on the battlefield."

The 5-0 scoreboard was not just a number. It was a warning about the coming era: the value of a pilot will shift from who flies better to who supervises and designs better. AI handles the maneuvers. Humans design the missions, rules, and accountability. This is not a demotion. It is a shift in role.

4. The ACE Program and X-62A VISTA: From Simulation to Real Sky

After the AlphaDogfight results, test pilots at Edwards Air Force Base shook their heads. "That was a video game," they said. Their skepticism was valid.

The sky inside a simulator is always clean. Wind blows by formula. Sensors never lie. Communication links never break. Most importantly, mistakes can be reset with a button. But the real world is different. Sensors are noisy. Communications are jammed. Turbulence shakes the airframe. A crash is not game over—it is death.

DARPA moved to bridge that gap. The ACE (Air Combat Evolution) program, launched in 2019, aimed not just to prove AI could fly, but to build the trust needed for human-AI teaming in combat.

At the heart of the plan was a unique aircraft: the X-62A VISTA. Externally it looks like a standard two-seat F-16D. But inside, it is a different world entirely. Built by Lockheed Martin's Skunk Works and Calspan, VISTA can mimic the flight characteristics of other aircraft simply by changing software settings.

In December 2022, historic flights began. From December 1 to 16, the X-62A conducted 12 AI-piloted flights totaling over 17 hours. The code that had learned through tens of millions of simulated trials was now controlling real jet thrust and aerodynamic surfaces in the physical sky.

In September 2023, the milestone arrived. Over Edwards Air Force Base, the AI-piloted X-62A engaged a human-piloted F-16 in actual aerial combat. Two fighters charged at each other at a combined closing speed of 1,900 km/h, coming within 600 meters. Real nose-to-nose combat.

In May 2024, Secretary Kendall personally boarded the X-62A. He sat in the back seat while AI flew the fighter through combat maneuvers at over 550 mph and 5Gs. Afterward, he declared: "I would be comfortable with AI having weapons release authority."

Safety systems were rigorous. A Runtime Assurance system monitored every AI command in real time, checking each instruction at the millisecond level. If an AI command threatened structural limits or a ground collision, the system would instantly cut AI control and return the aircraft to a safe state. A safety pilot in the rear seat could always override. In every test flight of the X-62A, the safety pilot never once had to hit the emergency switch.
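The logic of such a safety gate can be sketched in a few lines. Everything below is an illustrative assumption (the limits, the field names, the recovery command), not the X-62A's actual Runtime Assurance software:

```python
from dataclasses import dataclass

@dataclass
class Command:
    g_load: float        # commanded load factor in g (hypothetical field)
    altitude_m: float    # predicted altitude after the maneuver, meters

G_LIMIT = 9.0            # assumed structural limit
FLOOR_M = 1500.0         # assumed ground-collision safety floor

def runtime_assure(cmd: Command) -> Command:
    """Pass the AI's command through only if it stays inside the safety
    envelope; otherwise substitute a safe recovery command."""
    if cmd.g_load > G_LIMIT or cmd.altitude_m < FLOOR_M:
        return Command(g_load=1.0, altitude_m=max(cmd.altitude_m, FLOOR_M))
    return cmd

safe = runtime_assure(Command(g_load=12.0, altitude_m=900.0))
print(safe)  # recovery command: back to 1g, above the floor
```

The essential property is that the gate sits between the AI and the control surfaces, so an unsafe command never reaches the airframe.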

Part III: How AI Learns to Fight

1. Reinforcement Learning: The Immortal Born from Millions of Virtual Deaths

When Heron Systems' AI crushed the human pilot 5-0 in the AlphaDogfight finals, many asked: how did this AI learn to fight? The answer lies in reinforcement learning.

Think of training a puppy. Sit on command, get a treat. Chew the furniture, get scolded. The puppy learns which behaviors are good through rewards and penalties. AI works the same way. Good actions earn rewards; bad actions earn penalties. The AI optimizes its behavior to maximize rewards.

But reinforcement learning for AI fighters is far more complex than dog training. The design of the reward function is everything. If you simply say "destroy the enemy," the AI might learn kamikaze tactics—charging straight in for mutual destruction.

So engineers must craft elaborate reward systems. Destroy the enemy but survive. Get behind the enemy's six o'clock for bonus points. Waste energy on reckless maneuvers, lose points. Enter a civilian zone, heavy penalty. Lockheed Martin's AlphaDogfight team designed their reward system with advice from retired F-16 pilots. Decades of pilot experience translated into equations and weights.
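The reward-shaping idea can be made concrete with a toy function. The events and weights below are invented for illustration; real programs tune such terms with pilot input, as described above:

```python
# Hypothetical shaped reward for a dogfight agent: destroy but survive,
# reward the six-o'clock position, penalize wasted energy and civilian zones.
# All event names and weights are illustrative assumptions.
def reward(state: dict) -> float:
    r = 0.0
    if state.get("enemy_destroyed"):
        r += 100.0
    if state.get("self_destroyed"):
        r -= 150.0          # dying costs more than a kill earns: no kamikaze
    if state.get("on_enemy_six"):
        r += 5.0            # bonus for getting behind the enemy
    r -= 0.1 * state.get("energy_wasted", 0.0)
    if state.get("in_civilian_zone"):
        r -= 500.0          # heavy penalty
    return r

print(reward({"enemy_destroyed": True, "self_destroyed": True}))  # -50.0
```

Note the asymmetry: because a loss outweighs a kill, a mutual-destruction exchange nets out negative, which is exactly what discourages kamikaze tactics.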

In the finals, Heron's AI performed a stunning maneuver: the head-on gunshot. Flying straight at the opponent and firing the cannon head-on. Human pilots instinctively avoid this due to collision risk. Training regulations forbid it. But the AI, through tens of millions of virtual deaths, had discovered on its own that "if I fire precisely 0.1 seconds before collision, I can destroy the enemy before I die." No human taught this tactic. The AI found the winning formula through infinite trial and error.

This process is called curriculum learning. You start easy—first, just maintain level flight. Then aim at a stationary target. Gradually face tougher opponents. At the final stage, fight yourself—or past versions of yourself. This is self-play, the same method AlphaGo used to conquer the game of Go.
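A minimal sketch of that progression, with a placeholder train() standing in for a full reinforcement-learning loop (stage names and episode counts are illustrative):

```python
import random

def train(agent, opponent, episodes):
    # Placeholder: each "episode" just records who the agent practiced against.
    for _ in range(episodes):
        agent["history"].append(opponent)
    return agent

agent = {"history": []}

# Curriculum learning: easy tasks first, then progressively harder opponents.
curriculum = ["level_flight", "static_target", "scripted_enemy"]
for stage in curriculum:
    agent = train(agent, stage, episodes=2)

# Self-play: the final stage is fighting frozen past versions of yourself.
past_selves = ["self_v1", "self_v2", "self_v3"]
for _ in range(3):
    agent = train(agent, random.choice(past_selves), episodes=1)

print(len(agent["history"]))  # 9 practice episodes across the curriculum
```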

Reinforcement learning begins with the freedom to fail. A real pilot's single failure means death. But a virtual AI dies tens of millions of times and is reborn as an immortal. The experience that humans accumulate over a generation, AI compresses into days. A being trained in a world where the laws of time are different descends into reality. That is why the AlphaDogfight AI overwhelmed a veteran pilot. Not because it calculates faster, but because it has accumulated an experience base no human lifetime could match.

2. Transferring Simulation Experience to Reality

If you take the AI that dominated in virtual combat and seat it in a real F-16, can it immediately rule the sky? The answer is no. The sky inside a simulation is mathematically perfect. Wind blows by formula, air density is uniform, the engine runs at 100 percent efficiency. Sensors produce clean data, and communications have zero latency. Reality is nothing like that. Gusts blow, clouds blind sensors, signals drop, and the airframe vibrates in unexplainable ways.

This gap is called the Reality Gap. The technology to bridge it is Sim2Real—Simulation to Reality.

The first Sim2Real strategy is to deliberately make the simulation messy. This technique is called Domain Randomization. During training, wind strength is randomized, the center of gravity shifts, engine thrust fluctuates between 80 and 120 percent, sensor noise is injected, and communication delays are introduced. An AI trained in this "dirty" environment perceives the real world as just "another noisy simulation."
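In code, domain randomization is little more than sampling a fresh physics configuration for each training episode. The 80 to 120 percent thrust range comes from the text above; the other ranges are assumptions for illustration:

```python
import random

def randomized_episode_config(rng: random.Random) -> dict:
    """Draw a deliberately 'messy' physics configuration for one episode."""
    return {
        "wind_mps": rng.uniform(0.0, 25.0),          # random gusts
        "cg_shift_m": rng.uniform(-0.2, 0.2),        # center-of-gravity shift
        "thrust_scale": rng.uniform(0.8, 1.2),       # 80-120% engine thrust
        "sensor_noise_std": rng.uniform(0.0, 0.05),  # injected sensor noise
        "comm_delay_ms": rng.uniform(0.0, 200.0),    # communication latency
    }

rng = random.Random(0)
cfg = randomized_episode_config(rng)
print(cfg)
```

An agent that must succeed across thousands of such draws cannot overfit to any one set of physics, which is why the real world then reads as "just another noisy simulation."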

The second strategy is to continuously feed real-world data back into the simulation. This is called Digital Twin. Real flight data is collected in real time to update the virtual model. Aerodynamic data, engine response times, vibration patterns—all used to refine the simulation. The virtual world gradually converges with reality.

The U.S. Air Force operates the X-62A VISTA experimental aircraft to validate Sim2Real. Since December 2022, AI-piloted flight tests have been conducted over Edwards Air Force Base. In 2023, the X-62A engaged a human-piloted F-16 in actual aerial combat—the world's first real-sky human-vs-AI dogfight.

Future AI fighters will know basic combat skills before takeoff, but evolve in real time during the fight by identifying enemy patterns on the spot. This is called meta-learning or online adaptation. Sim2Real is not simply installing software. It is the process of a virtual intelligence putting on a physical body and learning the laws of reality.

3. Sensor Fusion: Integrating Radar, EO/IR, and ESM Data

The secret to winning air combat is simple: see first, shoot first. But "seeing" is more complicated than it sounds. A human pilot's eyes can only see what is outside the canopy. Modern air combat is decided hundreds of kilometers away.

The problem is that no single sensor is perfect. Radar sends out radio waves and detects reflections—but stealth aircraft are designed to minimize radar returns, and using radar reveals your own position. EO/IR sensors detect heat from enemy engines—silent and effective even against stealth, but poor at measuring range and vulnerable to weather. ESM passively listens for enemy transmissions—revealing enemy type but useless when the enemy's radar is off.

Each sensor has simultaneous strengths and weaknesses. For a human pilot to integrate all this data in their head in real time is nearly impossible. This is where AI's true power emerges: sensor fusion. AI simultaneously analyzes and integrates data from radar, EO/IR, and ESM to create a single, unified situational picture.
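One classical ingredient of such fusion is inverse-variance weighting: trust each sensor in proportion to its precision. The numbers below are hypothetical, but they show why fusing a precise radar range with an imprecise EO/IR range yields an estimate better than either sensor alone:

```python
def fuse_estimates(measurements):
    """measurements: list of (value, variance) pairs.
    Returns the inverse-variance weighted value and its fused variance."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return fused, 1.0 / total

radar_range = (102.0, 4.0)   # km; low variance: radar measures range well
eoir_range  = (95.0, 25.0)   # km; high variance: EO/IR is poor at range
fused, var = fuse_estimates([radar_range, eoir_range])
print(round(fused, 1), round(var, 2))
```

The fused variance (about 3.45) comes out lower than the radar's own 4.0: even a noisy second sensor tightens the overall picture, which is the statistical heart of sensor fusion.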

The F-35 Lightning II is the embodiment of sensor fusion. Its AN/APG-81 AESA radar, AN/AAQ-40 EOTS, AN/AAQ-37 DAS (six infrared cameras covering 360 degrees), and AN/ASQ-239 electronic warfare suite all feed data to AI, which fuses it into a single integrated battlefield picture displayed on the pilot's helmet. Pilots call it the "God's Eye View."

Sensor fusion is the technology that lifts the fog of war. The scariest thing in combat is not the enemy—it is not knowing. AI's sensor fusion shines light into that darkness. That is the true power of fifth- and sixth-generation fighters. Not stealth. Fusion.

4. Target Detection, Tracking, and Identification: ATR and Multi-Target Tracking

Altitude 25,000 feet. The sky beyond the canopy is crystal blue. But beneath that calm lurks lethal tension. The Radar Warning Receiver begins chirping. Someone is watching me. Six green dots appear on the tactical display. Friend or foe? Or just passing civilian airliners? I cannot tell.

Old-school fighter pilots relied entirely on their own eyes and instincts for this moment. We called it the "Mk.1 Eyeball"—the human eye.

This is where ATR enters the picture. Automatic Target Recognition. Simply put, the machine finds the enemy on its own and tells you: "That's hostile" or "That's friendly." At the core of ATR is deep learning—AI that learns patterns from millions of images.

Imagine a drone swarm. Hundreds of small drones rushing in like a swarm of bees. Which is a kamikaze drone? Which is a mere decoy? Human eyes cannot tell. But AI analyzes each drone's flight pattern and delivers its verdict in under a second.
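As a toy illustration of pattern-based classification (fielded ATR uses deep networks trained on imagery; the features and thresholds here are invented):

```python
def classify_drone(track: dict) -> str:
    """Score one track by simple flight-pattern features.
    Illustrative only: real ATR learns these patterns from data."""
    score = 0
    if track["closing_speed_mps"] > 50:   # diving toward a defended asset
        score += 2
    if track["altitude_m"] < 200:         # terrain-hugging approach
        score += 1
    if not track["emitting"]:             # silent, non-transmitting approach
        score += 1
    return "likely kamikaze" if score >= 3 else "likely decoy"

print(classify_drone(
    {"closing_speed_mps": 80, "altitude_m": 120, "emitting": False}
))  # fast, low, silent -> "likely kamikaze"
```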

The law of air combat has not changed. First Look, First Shoot, First Kill. What has changed is speed. The human brain has biological limits on judgment speed. AI's computing speed operates not in milliseconds but microseconds. By the time a human thinks "Huh?," the AI has already identified the target and opened the missile seeker.

Yet humans do not become irrelevant. No matter how many targets a machine detects and classifies, the final decision remains with humans. What if the AI misidentifies a civilian airliner as hostile? Someone needs to say, "No, that's not the enemy." A hunter commanding a powerful hunting dog. That is the future combat pilot's role.

5. Explainable AI (XAI)

July 3, 1988. The Persian Gulf. In the combat information center of the USS Vincennes, tragedy began. An unknown track appeared on the radar screen. The computer system classified it as an Iranian F-14 fighter. The captain ordered a missile launch.

But it was not an F-14. It was Iran Air Flight 655, a civilian airliner. All 290 passengers and crew perished. Later investigation revealed the computer system had displayed incorrect information, and the crew, under extreme stress, had taken it at face value.

Why did this happen? The system simply said: "That is an F-14." It did not explain why it made that judgment, how confident it was, or whether alternatives existed. The crew had no way to verify the machine's word. This is the Black Box problem.

XAI—Explainable AI—was born to solve this. It is the technology that makes AI explain in human-understandable terms why it reached a particular conclusion.

Instead of "This is an enemy tank, 97% probability," XAI says: "This is a T-90 tank. Basis: First, turret shape matches T-90 database at 95%. Second, infrared engine heat distribution shows diesel characteristics. Third, escort vehicle formation matches Russian armored doctrine."
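The difference between a bare percentage and an explanation is, at bottom, a data-structure choice. A sketch of what an XAI-style verdict might carry, using the T-90 example above (the class and field names are assumptions, not any fielded system's format):

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    verdict: str
    confidence: float
    evidence: list = field(default_factory=list)  # (basis, score) pairs

    def report(self) -> str:
        """Render the verdict plus its ranked, per-factor evidence."""
        lines = [f"{self.verdict} ({self.confidence:.0%})"]
        for i, (basis, score) in enumerate(self.evidence, 1):
            lines.append(f"  {i}. {basis}: {score:.0%}")
        return "\n".join(lines)

exp = Explanation(
    verdict="T-90 tank",
    confidence=0.97,
    evidence=[
        ("Turret shape matches T-90 database", 0.95),
        ("IR heat distribution consistent with diesel engine", 0.88),
        ("Escort formation matches Russian armored doctrine", 0.80),
    ],
)
print(exp.report())
```

The operator can now challenge any single line of evidence, which a lone "97%" never allows.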

In April 2025, the U.S. Air Force published its AI doctrine document (AFDN 25-1), mandating "transparent and explainable algorithms" and "regular audits and evaluations." Trust calibration matters too: AI that is trusted too much leads to tragedies like the Vincennes. AI that is trusted too little gets turned off and wasted.

Intelligence without explanation is, on the battlefield, indistinguishable from madness. What we need is not a ghost in a black box, but a comrade we can trust with our lives. XAI is our effort to build that comrade.

Part IV: The Age of the Loyal Wingman

1. Manned-Unmanned Teaming (MUM-T): Human Leaders and AI Partners

In the cockpit of an F-16 Viper, I flew hundreds of sorties. On Wild Weasel missions charging toward enemy SAM sites, my Radar Warning Receiver screamed incessantly. When an infrared missile's smoke trail grazed past the canopy and 9Gs crushed my lungs and narrowed my vision, I was utterly alone. Even if I had a wingman, he was busy keeping himself alive, and every decision rested on my two hands and two eyes.

But now the war in the sky is transforming into something entirely different.

Manned-Unmanned Teaming, or MUM-T, is not simply placing a drone next to a manned aircraft. It is a revolution that changes the nature of combat itself. A single manned fighter teams with multiple AI drones to encircle and pressure the enemy like a wolf pack driving prey. The human pilot is no longer a lone gladiator. He becomes a commander in the sky.

On a Wild Weasel mission, to find an enemy air defense radar I had to get myself detected first. Only when the enemy locked onto me would their position be revealed, and then I could fire a HARM anti-radiation missile. In other words, I was the bait. In a MUM-T environment, I no longer need to be the bait. A drone goes in my place, activates its radar, and draws the enemy's attention. The moment the enemy turns on their fire-control radar against the drone, I detect their position from a safe distance and launch my missile.

2. Collaborative Combat Aircraft (CCA): A Fleet of 1,000 Unmanned Fighters

On August 27, 2025, General Atomics' YFQ-42A completed its maiden flight at a California test range. Just two months later, on October 31, Anduril's YFQ-44A "Fury" took to the skies. Less than two years from contract to first flight.

The U.S. Air Force selected Anduril and General Atomics for CCA Increment 1 development. The goal: acquire over 1,000 CCAs. The math is roughly 500 manned fighters, each accompanied by two unmanned wingmen. The Air Force plans to invest over $8.9 billion in this program from 2025 to 2029.

Each CCA is estimated to cost between $20.5 million and $27.5 million—about a quarter the price of an F-35. When an enemy missile finds its mark, it destroys a $20 million drone instead of a $100 million fighter. This is the economics of war.

3. Boeing MQ-28 Ghost Bat and XQ-58A Valkyrie

The MQ-28 Ghost Bat, co-developed by Boeing and the Royal Australian Air Force, was the first unmanned combat aircraft to receive a name. At 11.7 meters long and capable of carrying 500 kilograms of payload, it flies in formation with F/A-18s and F-35s. As of 2025, it has completed over 100 test flights and will soon begin air-to-air missile firing trials.

Kratos's XQ-58A Valkyrie took a different approach. Launched by rocket booster and recovered by parachute, it needs no runway. At annual production rates of 250 to 500 units, the per-unit cost drops below $2 million.

4. Increment 1 CCAs: YFQ-42A and YFQ-44A Fury

General Atomics' YFQ-42A is based on its Gambit concept, evolving from the XQ-67A demonstrator design. Its V-tail, top-mounted engine inlet, and internal weapons bay provide stealth performance. Tricycle landing gear enables operations from unprepared runways.

Anduril's YFQ-44A took a different path. It uses a commercial business jet engine and mounts weapons on external hardpoints. Speed over stealth, cost reduction over signature management. Maximum altitude 50,000 feet. Mach 0.95—nearly the speed of sound. Capable of 9G maneuvers.

Anduril's Jason Levin stated: "From clean-sheet design to first flight took 556 days. Faster than any major fighter program in recent history."

These two aircraft open the door to placing unmanned fighters at the center of combat power. The pilot is no longer a lone hero; he is evolving into a command node. Victory will be decided not by a single stroke of genius but by dozens of rational sacrifices.

5. Shield AI's X-BAT: The Game-Changer of Runway-Free Air Combat

What is a runway to a fighter pilot? A lifeline. And also a grave. Because the enemy knows it too. When war begins, runways are the first things bombed. Long, flat, and clearly visible in satellite imagery. A few ballistic missiles turn a runway into Swiss cheese. Without a runway, fighters cannot fly. Even the most expensive F-22 or F-35, stuck on the ground, is just scrap metal.

The company that set out to overturn this nightmare is Shield AI. Their weapon is the X-BAT.

On October 21, 2025, in Washington, D.C., Shield AI unveiled the X-BAT. A vertical-takeoff, vertical-landing stealth jet drone. No runway required.

Co-founder Brandon Tseng, a former Navy SEAL, declared: "Runway-free airpower is the holy grail of deterrence."

The X-BAT's engine is GE Aerospace's F110-GE-129—the same engine powering F-15s and F-16s. Range over 2,000 nautical miles. Maximum altitude 50,000 feet. Its brain is Hivemind, Shield AI's AI pilot software, already combat-proven through over 130 sorties in Ukraine since June 2024, locating Russian SA-11 Buk-M1 mobile air defense systems in GPS-jammed environments.

The X-BAT's strategic significance is profound. Existing fighters are chained to runways. X-BAT launches from trailers—forest clearings, remote islands, pitching ship decks, anywhere. The enemy can no longer think "just hit three air bases and we're done." Potential launch points multiply into the thousands.

Part V: The Global AI Fighter Development Race

At 30,000 feet, gazing through the canopy at the dark blue sky, a pilot feels solitude. As the oxygen mask heats your breath and the helmet presses down on your shoulders, you can trust only your own two eyes and trained instincts. But now, a new presence is intruding into that solitude. A companion made not of flesh and blood but of silicon and algorithms, one that never tires and knows no fear.

The skies of the world have become a silent battlefield. The competition among America, China, and Europe is not a mere technology race. It is a life-or-death gamble over who will command the skies for the next century.

1. The United States: DARPA ACE, VENOM, and F-22/F-35 CCA Integration

In the summer of 2020, an AI agent achieved a clean 5-0 victory over a veteran F-16 instructor at DARPA's AlphaDogfight Trials. Secretary Kendall put it bluntly: "When a human takes fractions of a second, AI reacts in microseconds. Orders of magnitude difference. And that difference decides the outcome."

DARPA's ACE program brought this technology from simulation into real skies. From December 2022 to September 2023, 21 test flights were conducted over Edwards Air Force Base. Over 100,000 lines of core flight software were modified. In September 2023, the X-62A VISTA, under AI control, engaged a human-piloted F-16 in actual aerial combat.

In May 2024, Secretary Kendall personally rode in the X-62A's back seat. He later compared the experience to "the Wright brothers' flight of 1903 or Chuck Yeager's breaking of the sound barrier in 1947."

Project VENOM joined the effort, converting six active-duty F-16s into "flying AI laboratories" to validate autonomous flight technology under real-world conditions.

All roads lead to the CCA—Collaborative Combat Aircraft. The concept: pair one expensive manned stealth fighter (F-22 or F-35) with two to three low-cost, mass-producible AI drones. The manned jet becomes the quarterback; the drones become receivers. In April 2024, the Air Force selected Anduril and General Atomics for CCA Increment 1. Each CCA costs roughly one-quarter the price of an F-35. The Air Force aims to field over 1,000 CCAs. When an enemy missile connects, it claims a $20 million drone rather than a $100 million fighter. Economics as warfare.

2. China: GJ-11 Sharp Sword, Dark Sword, and J-20 Formation Operations

Turn east, and the red dragon has been awake for some time. The People's Liberation Army Air Force is no longer a copycat of Western technology. Under Xi Jinping, China has introduced "Intelligentized Warfare"—a concept in which AI becomes the brain of war.

On November 11, 2025, the PLA released a stunning video: the GJ-11 stealth combat drone flying in formation with a J-20 stealth fighter and a J-16D electronic warfare aircraft. The new official codename for the GJ-11 was revealed: "Xuanlong"—the Mysterious Dragon.

China demonstrated to the world that it is operationally employing Manned-Unmanned Teaming. The formation—GJ-11 leading, J-20 and J-16D following—represented a tactic that even the United States has not yet fielded.

The Dark Sword represents an even more alarming concept: a supersonic air-combat drone. Most drones focus on subsonic reconnaissance or bombing. Dark Sword was designed for dogfighting. Since a human is not aboard, its AI can push to 15G or 20G—maneuvers that would render any human unconscious.

China's strategy is built on Military-Civil Fusion. AI capabilities from Baidu, Alibaba, and Tencent flow into defense. Data from 1.4 billion people feeds AI training. While the West debates "explainability" and "accountability," China focuses on utility.

3. Europe: GCAP Tempest and FCAS — Two Sixth-Generation Projects

Europe's skies have always been complicated. Airspace is dense, borders are short, and political memories are long. Nations that fought each other for centuries now must fly in the same formation. Consequently, Europe is walking two paths simultaneously: the UK-Italy-Japan GCAP (Global Combat Air Programme) and the France-Germany-Spain FCAS (Future Combat Air System).

GCAP's roots lie in Britain's Tempest project, first unveiled at the 2018 Farnborough Air Show. In December 2022, the UK, Italy, and Japan agreed to co-develop, formally launching GCAP with a target of fielding a sixth-generation fighter by 2035. In June 2025, the industrial joint venture Edgewing was established, with BAE Systems, Leonardo, and JAIEC each holding 33.3 percent.

GCAP is not about building a single airplane. It is about constructing a "system of systems"—a manned fighter at the center, surrounded by unmanned wingmen, sensor networks, and a combat cloud, all interconnected.

FCAS, led by France's Dassault, centers on its Next-Generation Fighter (NGF) and integrates drone swarms, a combat cloud, and satellite communication networks. The program passed its Final Design Review in June 2025 and is moving toward a full-scale flight demonstrator by 2029.

4. South Korea, Japan, and Others: KF-21 Boramae, NACS, and Regional Challenges

South Korea's KF-21 Boramae is the nation's first indigenous supersonic fighter. Since its first supersonic flight in January 2023, it has completed over 120 test flights, with mass production scheduled from 2026. Added to this is the NACS (Next-generation Air Combat System) program, combining AI and unmanned aircraft.

Korea Aerospace Industries (KAI) unveiled the NACS concept at a National Assembly seminar in late 2024: a scenario in which a single KF-21 simultaneously controls 4 to 16 drones. Because the KF-21 is an indigenous platform, South Korea holds the source code access rights. AI algorithms and communication protocols can be freely modified and optimized. No need to wait for Lockheed Martin's permission.

Japan reconfirmed GCAP's 2035 fielding schedule at the 11th GCAP Development Promotion Committee meeting in December 2025. Israel leads the world in AI drone technology with weapons like the Harop loitering munition. Turkey's Bayraktar Kizilelma, reaching Mach 0.9, entered mass production in 2024.

Part VI: Sixth-Generation Fighters and Intelligent Formations

1. What Is a Sixth-Generation Fighter: Beyond Stealth

If fifth-generation fighters revolutionized warfare with stealth and sensor fusion, the sixth generation layers AI autonomy and networked combat capability on top. A sixth-generation fighter is not a single airplane. It is a "system of systems." A manned aircraft at the center, surrounded by unmanned wingmen, sensor networks, and a combat cloud, all linked as one.

The U.S. Air Force's NGAD (Next Generation Air Dominance) program was originally conceived as a successor to the F-22 Raptor, but pivoted in 2024. Rather than pouring tens of billions into a single ultra-expensive manned stealth jet, a combination of more affordable platforms and CCA drones was judged superior in cost-effectiveness.

In Europe, the two mega-projects GCAP and FCAS are both premised on manned-unmanned cooperative combat. In China, aircraft believed to be the J-36 and J-50 have been spotted. The battle in the sky has entered an era where victory is decided not by the performance of a single aircraft but by the collective intelligence of the entire formation.

2. Intelligent Formations: Fighting with the Intelligence of the Swarm

The future of air combat is not a duel. It is a clash of swarms. The basic unit will be one human pilot leading several AI drones. Drones scout ahead, jam to the sides, and resupply weapons from behind. The human's role shifts from maneuvering to decision-making: setting tactical objectives, assigning mission intent to drones, and controlling the rules of engagement.

For a swarm of drones to fight effectively, the swarm members must trust each other, share information, and compensate for each other's mistakes. Humans cannot do this in real time. AI becomes the nervous system of the formation.

The problem is that the enemy also uses AI. Human-AI air combat ultimately becomes algorithm versus algorithm. Fake targets, fake signals, fake link tracks—the enemy sows data poison that feeds the AI. Paradoxically, the pilot of the AI era must train more classical skills: basic maneuvering, basic situational awareness, basic communications. When the link dies and AI becomes confused, survival depends on the human's airmanship and tactical instinct. The more technology advances, the more brutally important fundamentals become.

3. Hypersonics and Drone Swarms: The New Threat Environment

Hypersonic missiles arrive at Mach 5 and above. Hundreds of drones swarm in like a cloud of bees. These two threats fundamentally shake existing air defense systems.

The true power of the drone swarm lies in numbers. When dozens of targets appear on the enemy's radar screen, their air defense system freezes. While they decide what to shoot first, precious seconds slip away. In that chaos, the real predator—the manned fighter—delivers the decisive blow.

U.S. Air Force Chief of Staff General Kenneth Wilsbach stated that CCAs will confuse the enemy and contribute to gaining air superiority. The 2025 U.S. Air Force report is explicit: CCAs complement, not replace, manned fighters. Paired with fifth-generation jets, they will be the key to dominating future high-intensity battlefields.

4. Cost Exchange Ratios and the Economics of Attritable Drones

A single F-35 Lightning II costs well over $100 million. If the enemy shoots it down with a missile costing a fraction of that, the enemy wins strategically—even if we fought well tactically. Like capturing a queen with a pawn in chess. A card was needed to flip this unfavorable math.

That card is the attritable drone. The goal is for it to complete its mission and return, but if it is shot down, the strategic damage is limited. A pawn you can sacrifice to protect the king.

Produce 250 to 500 XQ-58A Valkyries per year and the unit cost drops below $2 million. Same budget: two F-35s or 100 Valkyries. Two F-35s can attack two targets simultaneously. One hundred Valkyries can hit 50 targets at once, or concentrate 10 on each of 10 targets.
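The arithmetic behind that comparison, made explicit using the article's own figures:

```python
# Unit prices from the text: a $100M-class F-35 versus a sub-$2M Valkyrie
# at annual production rates of 250-500 units.
F35_COST = 100_000_000
VALKYRIE_COST = 2_000_000
BUDGET = 2 * F35_COST            # the "same budget" comparison

valkyries = BUDGET // VALKYRIE_COST
print(valkyries)                 # 100 airframes for the price of two F-35s

# 100 drones: 50 targets hit in pairs, or 10 targets saturated 10-deep.
print(valkyries // 2, valkyries // 10)
```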

"Quantity has a quality all its own." That old maxim takes on new meaning in the AI era of air combat.

5. Generative AI Applications: Automating Mission Planning, Maintenance, and Intelligence

Movies only show pilots pulling off spectacular maneuvers. In reality, a pilot's life is 90 percent paperwork and 10 percent flying.

Generative AI is fundamentally changing this. In March 2025, the Defense Innovation Unit (DIU) announced Project Thunderforge. Developed jointly by Scale AI, Anduril, and Microsoft, the system accepts commander inputs like "neutralize the air defenses at Point A" and generates dozens of operational scenarios by analyzing thousands of variables in an instant.

According to U.S. Army Major General Robert Cloud, in the DASH-2 exercise, human staff produced three courses of action in 16 minutes. AI produced ten in 8 seconds. Four hundred times faster.
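The "four hundred times" figure follows from comparing time per course of action:

```python
# 3 courses of action in 16 minutes versus 10 in 8 seconds.
human_secs_per_coa = 16 * 60 / 3    # 320 seconds per course of action
ai_secs_per_coa = 8 / 10            # 0.8 seconds per course of action
speedup = human_secs_per_coa / ai_secs_per_coa
print(round(speedup))               # 400
```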

In December 2025, the Pentagon took a historic step: launching GenAI.mil, a website providing generative AI tools to all 3 million military personnel, civilians, and contractors. The Secretary of Defense declared: "The future of American warfare is here. It is called AI."

In maintenance, AI finds patterns in data. "Engine turbine blade 3's vibration pattern is abnormal. 78% probability of failure within 48 hours." Such predictions become possible. The U.S. Air Force's PANDA system already monitors over 3,000 aircraft. Lockheed Martin and ManTech announced an AI-based aircraft maintenance partnership in December 2025.
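The underlying pattern check can be as simple as a statistical control band. A minimal sketch (the readings, threshold, and alert rule below are illustrative, not the PANDA system's actual method):

```python
import statistics

def vibration_alert(history, latest, k=3.0):
    """Flag a component when its latest reading drifts beyond k standard
    deviations of its own historical baseline."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    return abs(latest - mean) > k * std

normal_history = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00]
print(vibration_alert(normal_history, 1.01))  # False: within the normal band
print(vibration_alert(normal_history, 1.40))  # True: abnormal pattern
```

Production systems replace this threshold with learned models, but the principle is the same: the aircraft's own history defines "normal," and deviation triggers the prediction.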

The age of preparing for war with Excel files and PowerPoint is over. The future of war is algorithm versus algorithm.

Part VII: The Ethics and Law of Algorithmic Warfare

1. The LAWS Debate: Who Pulls the Trigger?

Altitude 30,000 feet, speed Mach 1.2. Outside the canopy is pitch darkness. Inside the cockpit, the Radar Warning Receiver chatters ceaselessly. An enemy surface-to-air missile radar has locked onto me. My finger rests on the red button on the stick. The weapons release button. That cold metallic touch. My heart pounds.

At this moment, I ask: Should I pull this trigger?

Is the target correct? Are there civilians? Is my wingman clear? Hundreds of training sorties and real combat missions inform my judgment. I decide. Fire. The missile leaps from the rail. Whatever the result, the responsibility is mine. That was the rule I upheld for 20 years as a fighter pilot.

But now, that rule is being shaken.

Lethal Autonomous Weapon Systems—LAWS—have appeared on the battlefield. These machines find targets, track them, and destroy them without human command.

Look at Israel's Harop drone. It loiters in the sky, detects an enemy radar signal, and dives to self-destruct. Turkey's Kargu-2 drone reportedly tracked and attacked retreating soldiers autonomously in Libya's civil war in 2020.

In 2024, the UN General Assembly adopted a resolution on autonomous lethal weapons by an overwhelming margin. 166 nations voted in favor; only three opposed: Belarus, North Korea, and Russia.

Why is this a problem? Pulling the trigger is not merely a physical act. It is a moral decision. When a medieval knight swung his sword, he looked into his opponent's eyes. If the opponent surrendered, the knight stayed his blade. Humans are capable of mercy. They know hesitation.

Machines do not.

The proponents have a point. They say: "Machines do not get tired. They are not gripped by fear. They are not consumed by vengeance." This is true. But war is not theory. War is chaos.

There is a larger problem still. The threshold for war is lowered. A war without body bags is a sweet temptation for politicians. If only broken metal comes home, media criticism fades. Nations may use force more frequently, more lightly. An endless era of algorithmic conflict could dawn.

I ask: What qualifies one to pull the trigger?

As a pilot, I believe that qualification is responsibility. When I fire a missile, I carry the weight of the outcome. Sleepless nights. Nightmares. A lifetime of memory. That is the cross of the human warrior. Machines do not know that weight. There is no line in the code for "guilt."

We can borrow efficiency from machines. But we must not sell our souls.

2. Human-in-the-Loop

The cockpit is cramped. Turn your shoulders and you touch the canopy. Yet in this narrow space, the pilot must process hundreds of inputs simultaneously.

That is why we need systems. Radar finds the enemy, the computer calculates the firing solution, warning devices flag threats. But at the end of every process, the finger pressing the launch button is still human. That is the core of "Human-in-the-loop."

Three levels exist.

First: Human-in-the-loop. The human directly approves each engagement. The machine says "Target here." The human says "Confirmed—fire."

Second: Human-on-the-loop. The machine operates autonomously, but a human supervises. If something looks wrong, the human intervenes and stops it.

Third: Human-out-of-the-loop. The human is removed entirely. The machine handles everything from start to finish.
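The three levels above can be sketched as a single engagement gate (the enum and the approve/veto flags are illustrative, not any fielded system's interface):

```python
from enum import Enum

class Loop(Enum):
    IN_THE_LOOP = 1      # human approves each engagement
    ON_THE_LOOP = 2      # machine acts; human may veto
    OUT_OF_LOOP = 3      # machine acts alone

def engage(mode: Loop, human_approves: bool, human_vetoes: bool) -> bool:
    """Return True if the weapon may fire under the given supervision mode."""
    if mode is Loop.IN_THE_LOOP:
        return human_approves            # nothing fires without a "yes"
    if mode is Loop.ON_THE_LOOP:
        return not human_vetoes          # fires unless a human says "no"
    return True                          # OUT_OF_LOOP: fires regardless

print(engage(Loop.IN_THE_LOOP, human_approves=False, human_vetoes=False))  # False
print(engage(Loop.ON_THE_LOOP, human_approves=False, human_vetoes=False))  # True
```

The shift from level one to level two is visible in the code: silence blocks the shot in the first mode and permits it in the second. That single inversion carries most of the ethical weight.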

The problem is that the modern battlefield moves too fast. Hypersonic missiles arrive at Mach 5-plus. Hundreds of drones swarm in. Can a human press an approval button for each engagement in time? Human reaction speed—roughly 0.2 to 0.3 seconds—cannot keep pace with machines operating in microseconds.

The "approve" button, if required every time, becomes a bottleneck. A slow drone is a good target. But if you hand control entirely to AI, the accountability problem for misidentification and friendly fire remains. The future will likely be graduated. Defensive autonomy expands first: evasive maneuvers, threat avoidance, jamming countermeasures, formation realignment. Offensive autonomy will carry stricter rules and constraints. AI takes on the body of the fight; humans retain the will.

3. The Accountability Gap: When AI Commits Friendly Fire, Who Stands Trial?

In March 2003, during the Iraq War, a Patriot missile shot down a British Royal Air Force Tornado GR4. The IFF system malfunctioned. Two British pilots were killed. This was a fratricide caused by an automated system.

If AI autonomously decides to strike and hits civilians or allies, who bears responsibility? The programmer who wrote the code? The commander who ordered deployment? The defense contractor who built it? Or the AI itself?

Current international humanitarian law assumes a human decision-maker. The principle of proportionality, codified in Additional Protocol I to the Geneva Conventions, dictates that civilian harm from an attack must not be excessive relative to the military advantage gained. But when AI makes that judgment, who evaluates proportionality?

The April 2025 U.S. Air Force doctrine document (AFDN 25-1) mandates "transparent and explainable algorithms" and "regular audits and evaluations." But this alone is insufficient. The legal framework cannot keep pace with the speed of technological development.

The accountability gap is not a technology problem—it is an institutional one. Algorithm audit systems that trace AI decisions, digital black boxes that record AI weapons usage outcomes, and clear chains of responsibility codified in law are all necessary.

Part VIII: Future Outlook and South Korea's Choice

1. The Evolution of KF-21 Boramae: KF-21EX and the Adaptive Autonomous Platform (AAP)

The KF-21 Boramae is a turning point in South Korea's aerospace history. But the real story begins here. KAI's envisioned KF-21EX—the extended variant—is not a mere upgrade. It is a platform redesigned as the command node for manned-unmanned cooperative combat.

The AAP (Adaptive Autonomous Platform) is not a simple decoy. Attach a reconnaissance sensor and it becomes eyes. Attach an electronic warfare pod and it becomes a shield. Attach a self-destruct warhead and it becomes a missile in itself. The word "adaptive" is key—it means the drone's mission equipment can be swapped out like LEGO blocks. Today's mission may call for a reconnaissance module; tomorrow, an electronic warfare pod; the day after, a self-destruct warhead.

KAI unveiled its Next-generation Air Combat System (NACS) concept at a National Assembly seminar in late 2024: a single KF-21 simultaneously controlling 4 to 16 drones. South Korea holds the source code access rights to the KF-21 as an indigenous platform. AI algorithms and communication protocols can be freely modified and optimized without waiting for permission from foreign contractors.

The hardest part of Korean MUM-T is not getting drones to fly. It is getting them to fight together. Coordination starts with data links but does not end there. In an electronic warfare environment, data is disrupted, delayed, and distorted. So the AAP must have two modes: when the link is alive, humans provide tactical command; when the link dies, the drone acts autonomously based on mission intent—finding radar emitters, assessing threat levels, and surviving within approved rules while humans serve as supervisors rather than pilots.
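That two-mode fallback can be sketched as a link-health check. This is my own illustrative sketch of the behavior described above, not KAI's design; every name here (the dataclass, the mode strings, the 200 ms threshold) is an assumption for the example:

```python
from dataclasses import dataclass

@dataclass
class LinkStatus:
    alive: bool        # is the data link up at all?
    latency_ms: float  # measured round-trip delay on the link

def select_mode(link: LinkStatus, max_latency_ms: float = 200.0) -> str:
    """Pick the drone's control mode from data-link health.

    With a healthy, fast link, tactical command flows from the human.
    If the link is jammed, dead, or too slow, the drone falls back to
    autonomy bounded by pre-approved mission intent.
    """
    if link.alive and link.latency_ms <= max_latency_ms:
        return "HUMAN_COMMAND"   # the manned fighter directs the drone
    return "MISSION_INTENT"      # act within approved rules of engagement
```

The design point is that degraded-but-alive counts as dead: a link too slow to carry timely orders is treated the same as no link, so the drone never sits idle waiting for commands that cannot arrive in time.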

Industrially, the blueprint is clear. The KF-21EX becomes an upgradeable core node, and the AAP becomes a mass-producible, attritable node. Mix a few high-end platforms with many low-cost ones, and combat power multiplies. Just as Roman legions mixed elite legionaries with auxiliaries to dominate battlefields, Korean MUM-T combines elite manned platforms with mass unmanned platforms to raise survival odds. From the cockpit, this is intensely practical: having something in front of you to take the hit. That is not technology; it is insurance. And in war, insurance is always expensive. Drones are a way to bring the premium down.

2. AI Pilot Technology and LEO Communications Satellites: KAI's NACS Strategy

The scariest sound in a fighter is not the missile warning tone. You get used to that. What is scarier is silence. The silence of a dead link. The silence of frozen updates. The silence of a battlefield that has suddenly regressed to the 1980s. What NACS aims to eliminate is precisely that silence. The AI pilot is the brain, and the network is the nervous system—this is not romance but structural truth.

The AI pilot technology KAI is developing reverses the brain-capacity allocation of a pilot. AI handles 90 percent of mechanical tasks: flying, detecting, system management. The pilot needs only to make the final call—attack or evade—based on AI-organized information. This dramatically accelerates the OODA loop. While the enemy is still thinking, we are already shooting.

But even the most brilliant AI pilot is limited alone. It needs data—a lot of it, very fast. Enter low-Earth orbit (LEO) communications satellites. Geostationary satellites sit at 35,800 kilometers—too far. Signal round-trip delay is noticeable. Half a second of latency is irrelevant when watching YouTube, but fatal when intercepting a Mach 2 missile.

LEO satellite constellations, orbiting at 300 to 1,500 kilometers, cut that latency to a few tens of milliseconds at most. Low signal loss enables ground-network-level high-speed communications. This is the core of low-latency, space-based 6G communications.
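The latency gap between orbits is plain geometry: the delay floor is set by the speed of light and the altitude. A quick back-of-the-envelope check (550 km is an assumed LEO altitude for illustration; real links add processing and routing delay on top of these minimums):

```python
C = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(altitude_km: float) -> float:
    """Minimum one-way propagation delay to a satellite directly overhead."""
    return altitude_km / C * 1000

geo = one_way_delay_ms(35_786)  # geostationary orbit altitude
leo = one_way_delay_ms(550)     # an assumed LEO constellation altitude

print(f"GEO one hop (up + down): {2 * geo:.0f} ms")   # ~239 ms
print(f"LEO one hop (up + down): {2 * leo:.1f} ms")   # ~3.7 ms
```

A single GEO relay hop already costs about a quarter of a second before any processing; a full request-and-response doubles that, which is where the familiar half-second figure comes from. The same round trip through LEO stays in single-digit milliseconds.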

In June 2025, KAI signed a 184 billion won agreement with the Ministry of Science and ICT, the Korea AeroSpace Administration, and IITP for 6G LEO communications satellite technology development. By 2030, two communications satellite bodies will be developed, assembled, tested, and launched. KAI has established a cooperative framework with KT and KTSat, aiming for the world's first 3GPP 6G-standard LEO satellite communications commercialization.

KAI's NACS strategy assumes that the KF-21, drones, AWACS aircraft, and ground control stations are all bound together through this LEO satellite network. Picture a KF-21 flying over North Korean airspace. Its own radar is off—to maintain stealth. But high-resolution imagery from LEO satellites and detections from rear AWACS are transmitted in real time to the KF-21's cockpit. Simultaneously, the KF-21's AI analyzes this information and issues attack commands to accompanying drones. All in milliseconds.

South Korea has the opportunity. It possesses the world's leading semiconductor technology and communications infrastructure. It has secured launch vehicle technology for LEO satellites. It has an abundance of talented people who can write AI algorithms. If all these elements are properly forged in the crucible of NACS, South Korea will build a uniquely Korean aerospace power, different from that of the United States or China.

3. Defense Industry and Technology Sovereignty: K-Defense's Next-Generation Challenge

At 30,000 feet, tracking an enemy fighter, the Radar Warning Receiver screams. An enemy SAM radar has locked on. In that instant, I flick chaff and flares with one finger and activate electronic warfare systems. The aircraft responds; I survive. But what if the chip inside that electronic warfare system is supplied only from abroad? What if the software update authority rests in another country's hands? In wartime, "please wait for parts delivery" is a death sentence for a pilot.

Technology sovereignty means the ability to independently create, repair, and advance the technology your country needs. In the world of fighters, this difference decides life and death. A weapon that requires someone else's permission to function in combat can hardly be called a weapon.

When Russia invaded Ukraine in 2022 and Western sanctions cut off advanced semiconductor imports, the world watched Russian defense factories strip chips from refrigerators and washing machines. In modern war, semiconductors are more important than oil. Without oil, tanks stop. Without semiconductors, the fighter's brain stops.

K-Defense—South Korea's defense industry—has achieved remarkable results. K9 howitzers roll across Polish plains. K2 tanks undergo trials in Romania. FA-50s fly the skies of the Philippines and Malaysia. In 2023, South Korea ranked 9th globally in arms exports; by 2025, exports exceeded $20 billion.

But behind the success lurk challenges. K-Defense's strengths have been fast delivery, reasonable pricing, and reliable quality. Going forward, however, the fight is on a different level. Hardware capability alone is insufficient. Self-reliance in software, semiconductors, and artificial intelligence—the invisible domain—is the real battleground.

Engines are a prime example. South Korea built the KF-21 Boramae superbly, but its heart is a GE engine from the United States. If America halts engine supply, the KF-21 cannot fly. Engine development is difficult—single-crystal alloy technology, precision casting—capabilities only a handful of nations possess. But to lead the sixth-generation era, this wall must be overcome. You cannot finish a marathon with someone else's heart.

Technology standards also become weapons. Whoever sets the interface, whoever defines the data format, whoever establishes the communication protocol between manned and unmanned aircraft—that party becomes the center of the ecosystem. If South Korea truly wants to lead, the goal must not be "we built it too" but "others are building to our standard."

South Korea's geopolitical position is harsh. Surrounded by great powers, with a shrinking population and declining military manpower. In this situation, the available strategy is the hedgehog strategy—small but armed with lethal quills. Those quills are K-Defense, armed with AI and advanced technology.

Ultimately, technology sovereignty is not a matter of ethics or pride. It is a matter of tactics. In an era when adversaries attack supply chains, shake satellites, and contaminate data through cyber operations, a force that cannot self-repair and self-upgrade is no different from a model in a glass display case. What survives in the sky is not the airframe—it is the system. Holding that system in your own hands is the essence of K-Defense's next-generation challenge.

4. The Future of Air Combat Written by Humans and AI Together

At night, above the cloud layer, the world splits into two. Stars and moonlight above, a black sea of clouds below. Inside the cockpit, dim instrument lights fill the space, my hand on the stick, feet on the rudder pedals. This posture has not changed in decades. What has changed is what I see and believe. Once, my eyes and my radar were the entirety of the battlefield. Now, the battlefield organized by AI, the targets synthesized by AI, the threat predictions calculated by AI—these become my reality.

The future formation looks nothing like today's. The model of one pilot in one fighter is fading. One human commanding multiple drones becomes the new norm. Drones scout ahead, jam to the flanks, and replenish weapons from behind. The human's role becomes not maneuvering but deciding: setting tactical objectives, assigning mission intent to drones, and controlling rules of engagement.

How much control do we hand to AI? The concept of "meaningful human control" exists—a mechanism to ensure ethical, safe use of autonomous systems. But research shows significant barriers to realizing this in highly autonomous AI systems. Temporal constraints come first: in hypersonic combat, AI makes decisions faster than a human can intervene.

Victory is determined not by a single brilliant maneuver but by a system that continuously updates. Who learns faster, who revises tactics faster, who patches vulnerabilities faster. War increasingly resembles software. In software warfare, the winner is always the same: the side that updates faster, has greater resilience, and designs so that humans can make the right call at the last moment.

In the cockpit, I still grip the stick. But beside me sits an invisible pilot. That pilot has no fatigue, no fear, no ego. In their place are errors, vulnerabilities to deception, and overconfidence. The air combat of humans and AI is an era of flying with this imperfect companion. The sky of the future may grow smarter, but it is also likely to grow more brutal.

Who will rule the skies? The answer is clear. Neither AI nor human alone. The human perfectly fused with AI. There is no need to fear the technology. But we must guard against losing control of it. A knife in the hands of a chef produces a fine meal; in the hands of a robber, it becomes a weapon. AI, too, depends on how we use and control it.

The path for South Korea lies here as well. Beyond developing AI technology, the nation must cultivate wise warriors who can operate it. At the point where technology and humanity, ethics and efficiency find harmony, true self-reliant defense and peace can be achieved.

The Top Gun of the future will no longer be a lone wolf. He will be a battlefield conductor, linked to his AI wingmen through a digital neural network. And that sky will still belong to the courageous.

Epilogue: Who Will Rule the Skies?

In May 2024, two F-16s faced each other over Edwards Air Force Base, charging toward one another at 900 kilometers per hour. In one of them, no human hand touched the controls. AI held the stick. An hour of fierce aerial combat. Perhaps the outcome had been foretold. The human pilot lost.

But does this mean the end?

I flew for decades. On Wild Weasel missions, I plunged into the heart of enemy air defenses. In the cockpit, with the Radar Warning Receiver screaming, I crossed the line between life and death countless times. When G-forces crushed my body, when my vision narrowed and consciousness faded, my fingers never left the stick. Human limits. They exist without question. Under 9Gs of pressure, my heart had to beat three times its normal rate, and I had to strain every muscle in my abdomen to keep blood flowing to my brain.

AI knows none of that suffering. No fear. No fatigue. No hesitation. It judges and reacts in milliseconds.

So is the age of Top Gun truly fading? Must human pilots, like Maverick in the movie, become relics of the past?

Looking back through history, this question is not new. In 1957, Britain's Defence White Paper declared the age of manned fighters over: missiles had arrived, and pilots were no longer needed. Yet nearly 70 years later, humans still sit in the cockpit. The Vietnam War overturned that prediction. Missiles alone could not dominate a complex battlefield. Human judgment, intuition, and situational understanding were required.

Think back to the Roman legionary standing on the battlefield with his gladius. That short sword was not merely a weapon. It held the spirit of a warrior who had to fight eye-to-eye with the enemy in close formation. The same was true of the medieval longbowman—the strength to draw the string, the eye to read the wind, the instinct to predict the enemy's movement. Weapons have changed, but the human role in wielding them has never disappeared.

The war in the sky will be no different.

What we are witnessing is not an ending but an evolution.

In 2025, Sweden's Saab Gripen E flew with an AI called Centaur alongside a human pilot. This AI was not built to replace the pilot. It was built to assist—analyzing complex sensor data, prioritizing threats, and providing the information needed for the pilot to decide. AI is, in effect, a tireless combat secretary.

The U.S. Air Force's Loyal Wingman concept operates in the same vein. A human-piloted fighter serves as flight lead, and AI-controlled drones fly alongside, guarding the flanks. Much like an ancient general advancing into battle surrounded by his bodyguard. Shield AI's X-BAT can take off and land vertically without a runway. Over 2,000 nautical miles of range and ceilings above 50,000 feet. Aircraft like these operating alongside F-35s—that future is drawing near.

Lockheed Martin is working to make the F-35 an "optionally manned" aircraft. The same jet can carry a human pilot or fly entirely by AI, depending on the situation. Dangerous missions go to AI; complex judgment calls bring the human in.

China is not standing still. Prototypes believed to be the J-36 and J-50 sixth-generation fighters have appeared in the sky. In August 2025, another stealth airframe was spotted. In Europe, GCAP and FCAS proceed simultaneously.

And there is South Korea.

The KF-21 Boramae is Korea's first indigenous supersonic fighter. The NACS program is layering AI and drones on top of it. Korea Aerospace Industries envisions a manned-unmanned combined system linked through low-Earth orbit communications satellites. For a small nation to survive among great powers, it must be smarter. Combining the human mind with the machine's speed. That is our path.

If asked who will rule the skies, I would answer this way.

Neither human nor AI. The one who stands with both will rule the sky.

Just as the knight became one with the horse to charge across the battlefield, the future combat pilot must become one with AI. When the radar warning sounds, when the enemy missile flies, in the moment a millisecond decision is needed, AI analyzes the data and presents the options. But the final decision to pull the trigger—the weight of that choice—remains with the human. Ethical responsibility. Legal responsibility. The responsibility of standing before history. That much cannot be handed to a machine.

Today I look up at the sky. Someday, what cuts through that blue expanse may be human, machine, or some combination of both. One thing is certain: the sky has always belonged to the most courageous, the most wise, and the best prepared. That was true on battlefields before Christ, in the biplane era of World War I, and in the age of the F-22 and F-35.

In the coming age of AI, that truth will not change.

To the heirs of the fighter pilot: Do not be afraid. Adapt. Evolve. Just as pilots of the past transitioned from propellers to jet engines, from mechanical controls to fly-by-wire, you too must move toward coexistence with AI. That is the way to survive, the way to victory, and the way to become the ruler of the skies.

The infinite sky beyond the canopy. It is still a domain that needs human dreams and courage. No matter how far AI advances, only humans know why we fight and what we protect. The technology of war changes, but the essence of war does not. And at the very center of that essence, a human always stands.

Who will rule the skies? The answer, in the end, rests with us.


KIMKJ.COM

#KyungJinKim #AIFighter #ArtificialIntelligence #AI #AIPilot #UnmannedCombat #TopGun #DARPA #F16 #F35 #CCA #LoyalWingman #AerialCombat #MilitaryAI #DefenseTech #KF21 #AutonomousWeapons #ReinforcementLearning #6thGenFighter #DroneSwarm #AIEthics #DefenseAI #kimkj #kimkjcom #AIAirForce #FutureWarfare #DroneWarfare #GCAP #FCAS #KDefense
