
An Air Force Pilot Will Battle AI In A Virtual F-16 Dogfight Next Week. You Can Watch It Live.

space cadet


An Air Force Pilot Will Battle AI In A Virtual F-16 Dogfight Next Week. You Can Watch It Live.


Mark your calendar, place your bets, get out the popcorn. An Air Force F-16 pilot will take on an artificial intelligence algorithm in a simulated dogfight next Thursday. And you can watch it live.

The Defense Advanced Research Projects Agency is running the program, which is called the AlphaDogfight Trials. It’s part of DARPA’s Air Combat Evolution (ACE) program.

The action will kick off Tuesday with AI vs. AI dogfights, featuring eight teams that developed algorithms to control a simulated F-16, leading to a round robin tournament that will select one to face off against a human pilot Thursday between 1:30 and 3:30 p.m. EDT. You can register to watch the action online. DARPA adds that a “multi-view format will afford viewers comprehensive perspectives of the dogfights in real-time and feature experts and guests from the Control Zone, akin to a TV sports commentary desk.”

With remarks from officials including USAF Colonel Daniel “Animal” Javorsek, head of the ACE program, recaps of previous rounds of the Trials, scores and live commentary, it’ll be just like Sunday Night Football — but on Thursday afternoon.

DARPA says ACE is about building trust, particularly among U.S. fighter pilots, in artificial intelligence as the Pentagon seeks to develop unmanned systems that will fly and fight alongside them.

“We envision a future in which AI handles the split-second maneuvering during within-visual-range dogfights, keeping pilots safer and more effective as they orchestrate large numbers of unmanned systems into a web of overwhelming combat effects,” Col. Javorsek said in a 2019 press release.


An F-16 from the 20th Fighter Wing at Shaw AFB, S.C. DVIDSHUB.NET

Eight teams were selected in 2019 to compete in the trials: Boeing-owned Aurora Flight Sciences, EpiSys Science, Georgia Tech Research Institute, Heron Systems, Lockheed Martin, Perspecta Labs, PhysicsAI and SoarTech.

On day one of the competition, the teams will fly their respective algorithms against five AI systems developed by the Johns Hopkins Applied Physics Lab.

The industry/academia teams will then face off against each other in a round-robin tournament on the second day. The third day will see the top four teams competing in a single-elimination tournament for the championship. The winner will then fly against a human pilot in the marquee engagement at Johns Hopkins’ APL.

The point of it all, as Col. Javorsek noted, is to build trust in AI in the life-and-death environment of aerial combat. Trust requires sharing — in this case the kind of information that American fighter pilots will likely demand. With that in mind, I asked DARPA for an interview with the F-16 pilot who will go up against all that math.

That request was denied on the grounds of “operational security.” Col. Javorsek declined to elaborate on the particulars of the dogfight scenario but assured me that careful consideration has been given to the simulated environment to ensure an engagement in which skill alone will determine the outcome.


An F-16 aggressor pilot in a 64th Aggressor Squadron F-16 at Nellis Air Force Base, Nevada.

One can reasonably ask what operational security concerns would prevent divulging the identity of the pilot, his or her experience, and the basic parameters of the fight.

America’s adversaries and allies would certainly understand the kind of operationally relevant one-on-one air combat maneuvering engagement that AlphaDogfight involves. Given the known quantity that is the F-16 weapons system and years of intelligence on American fighter doctrine and tactics, they’d surely comprehend it on a granular level.

So would American fighter pilots. The first question they’ll likely ask is, “Who’s the pilot in the simulator seat?” We do know from queries to the Air Force’s Air Combat Command and to Col. Javorsek that the pilot in question is an Air National Guard F-16 pilot from the local area.

Presuming that means the Baltimore-Washington, D.C. area, the pilot may be with the 113th Fighter Wing's 121st Fighter Squadron at Joint Base Andrews outside of D.C. The pilot is apparently a recent graduate of the F-16 Weapons Instructor Course and, as a Guardsman, likely a high-time Viper driver. Combat experience may or may not be on his resume.

DARPA hasn’t detailed the dogfight setup: what sensors, off-board sensors, weapons, ranges, fuel loads, G-limits, visual acuity, weather conditions, merge altitudes, command-and-control aids, or rules of engagement the protagonists will go into the fight with. Nor do we know anything about the simulator’s capability, configuration, or interaction.

Common sense suggests that if skill alone is to be the determining factor, the simulated aircraft should be the same with everything else equaled-out.

Turn On, Tune In

Colonel Javorsek said in an email that AlphaDogfight viewers will have access to some live data, things like relative closure and distance and throttle/stick/rudder positions, via heads-up-display-like presentations on the “Dogfight” or “Pilot Point of View” channels.


The "Dogfight Channel" which AlphaDogfight viewers can select on DARPA's ADT TV Page.

DARPA

Nevertheless, we don’t really know what the human pilot or AI team is expecting. If such information isn’t forthcoming to a detailed degree, one would imagine that fighter pilots will not be overflowing with trust or confidence (though they may be entertained), regardless of who wins.

But DARPA sure hopes they’ll watch.

“We are still excited to see how the AI algorithms perform against each other as well as a Weapons School-trained human and hope that fighter pilots from across the Air Force, Navy, and Marine Corps, as well as military leaders and members of the AI tech community will register and watch online,” Col. Javorsek said in a DARPA statement. “It’s been amazing to see how far the teams have advanced AI for autonomous dogfighting in less than a year.”
 

space cadet

Day 2 and Heron Systems is in first place; Lockheed Martin is in third and has been trading places with Georgia Tech for third and fourth. Heron looks like they will win this competition. The top four move on.
 

space cadet

Wow, the Heron AI just wiped the floor with the human pilot, 5 kills to 0. It did it all with gun kills; the AI was able to take gun shots from angles that would be just impossible for a human. Humans just aren't as steady, I guess. This will really be something if it is put into these loyal wingman drones.
 

space cadet



AI Claims "Flawless Victory" Going Undefeated In Digital Dogfight With Human Fighter Pilot​

The virtual tournament is part of a larger US military effort to explore uses for artificial intelligence and machine learning in aerial combat.​

BY JOSEPH TREVITHICK | AUGUST 20, 2020


A simulated F-16 Viper fighter jet with an artificial intelligence-driven "pilot" went undefeated in five rounds of mock air combat against an actual top Air Force fighter jockey today. The event was the culmination of an effort that the Defense Advanced Research Projects Agency (DARPA) began last year as an adjacent project to the larger Air Combat Evolution (ACE) program, which is focused on exploring how artificial intelligence and machine learning may help automate various aspects of air-to-air combat.

Heron Systems, a company with just 30 employees, had beaten out Aurora Flight Sciences, EpiSys Science, Georgia Tech Research Institute, Lockheed Martin, Perspecta Labs, PhysicsAI, and SoarTech to claim the top spot in the last of the Defense Advanced Research Projects Agency's (DARPA) AlphaDogfight Trials. This three-day event started on Aug. 18, 2020.

On the first day, all eight teams had sparred against five different types of simulated adversaries that Johns Hopkins University’s Applied Physics Laboratory (APL) had developed. These included one dubbed a "zombie," with a flight profile similar to a cruise missile or a large drone, as well as ones that performed like fighter jets, such as the F-16 Viper, or heavy bombers, according to Air Force Magazine.

On Aug. 19, the teams 'flew' against each other, whittling down the number of competitors to four finalists – Aurora Flight Sciences, Heron Systems, Lockheed Martin, and PhysicsAI – who moved on to the last phase. Those four remaining teams then battled each other in semi-finals earlier today.

Lockheed Martin beat PhysicsAI, while Heron Systems defeated Aurora Flight Sciences. Heron Systems then pulled off a major upset over number-two-ranked Lockheed Martin before going on to face the actual human F-16 pilot, a Weapons School instructor pilot with the callsign Banger, in simulated combat.

This tournament was the third and final trial in a series of events that started in November 2019. That initial trial involved teams flying simulated F-15 Eagle fighter jets, while the second one, which took place in January of this year, shifted to using the F-16 as the representative aircraft. The teams taking part in the competition this week again used digital representations of the Viper.



It's not entirely clear how the outcome of this tournament may now impact the larger Air Combat Evolution (ACE) program directly. DARPA has said in the past that it hopes the event will at least "energize and expand a base of AI developers" for ACE.



"Even though [dogfights are] probably less likely in the future, the need for an ability to handle that sort of situation won’t go away," Air Force Colonel Daniel “Animal” Javorsek, who is the program manager for ACE at DARPA, had told Air Force Magazine in an interview about ACE and the AlphaDogfight effort. "We continue to use it as a gateway into these more demanding scenarios like suppression of enemy air defenses or offensive counter-air."


The first phase of ACE is scheduled to wrap up next year and will include flight tests of experimental AI-driven systems to enable various kinds of autonomous capabilities on subscale propeller-driven and jet-powered unmanned aircraft. DARPA has plans for two subsequent phases, each 16 months long, that would transition those systems onto larger aircraft types.

The software and any other systems that come out of ACE, which could help improve the autonomous operation of unmanned aerial vehicles, as well as provide new kinds of automated assistance to the crew of manned aircraft, could then migrate to the Air Force around 2024. “As we are kind of pushing the roles and responsibilities of pilots into this battle manager category, then what we’re essentially doing in this program is enabling the autonomy to be even more capable to handle that aircraft maneuver and these rapid, high-tempo decisions in a dynamic environment,” Colonel Javorsek said.

AlphaDogfight and ACE could certainly help inform a number of different programs ongoing now within the Air Force that are exploring future autonomous and semi-autonomous unmanned aircraft capabilities, as well as the use of artificial intelligence and machine learning in the development of 'virtual co-pilots' for manned types. The best known of these projects is the Skyborg autonomous drone program, which you can read about in more detail in these past War Zone pieces.

The Air Force Research Laboratory's (AFRL) Autonomy Capability Team 3 (ACT3) is also working on a separate suite of systems that it hopes will be ready to control a drone in a dogfight against a manned fighter jet sometime next year. This program is called R2-D2, a reference to the iconic droid from the Star Wars universe whose primary function is to serve as a robotic navigator and flight engineer.

The AlphaDogfight Trials themselves also reflect a broader effort across the U.S. military to explore new, novel ways of engaging with both private companies and academic institutions to help speed up the development of various advanced capabilities. This has included the establishment of multiple technology incubators positioned around the United States, starting with the Pentagon's Defense Innovation Unit (DIU) in 2015.

No matter what, the digital dogfights today certainly underscore the ever-growing interest in artificial intelligence and autonomous capabilities throughout the U.S. military. It's certainly notable that Heron Systems' algorithms were able to go toe-to-toe with an actual Air Force fighter pilot and come out undefeated, but it remains to be seen whether this experience will reflect the outcome of any actual live flight testing in the future. It also may not necessarily represent just how advanced AI-infused autonomous aerial warfare is at present. Regardless, this was a very public display of the future of aerial combat.

Contact the author: [email protected]
 

space cadet

some more news in the world of AI


DARPA Wants Wargame AI To Never Fight Fair

Gamebreaker is about building an AI that can play a wargame in the best and most unfair way against its opponents.​

By KELSEY ATHERTON on August 18, 2020 at 6:06 PM

A screenshot from “Command: Modern Air/Naval Operations” showing a range of assets and sensors in a naval combat scenario set in 1975. (Image by author)

ALBUQUERQUE: Northrop Grumman is building an AI designed to find new strategies to break virtual opponents. Future AI tools, based on this research, could help human commanders break opponents in real battles.

The contract is part of DARPA’s Gamebreaker program, which wants to turn the design considerations of modern strategy games on their head, using AI to find every unfair advantage hidden in the game.

“Gamebreaker seeks a methodology for finding 'broken states' in games – situations in which one player in the game can gain unexpected advantages over a competitor,” Joshua Bernstein, director of advanced intelligent systems at Northrop Grumman, says. “In these applications AI finds asymmetrical conditions in a system (e.g., the game or a real-world scenario) and communicates these conditions to stakeholders, such as military planners.”

The Gamebreaker program is focused on a range of real-time strategy games, or programs where players command a range of units with different characteristics in competition against each other. These include the popular StarCraft series of games, where players build starfighters, train marines, and extract minerals in a futuristic alien setting. It also includes games like Google Research Football, which uses a physics engine to model the physical impacts of virtual players in a soccer game.

Northrop Grumman’s entry will be built in Command: Modern Operations, a hyper-realistic theater-wide combat simulator designed to model Cold War as well as present conflicts.

“If we can figure out a generic method to assess and then manipulate balance in commercial video games, my hope is that we might then apply those AI algorithms to create imbalance in DoD simulated war games used to train warfighters for real-world battle,” Lt. Col. Dan “Animal” Javorsek, the Gamebreaker program manager in DARPA’s Strategic Technology Office, said when the program was announced in May 2020.

Gamebreaker is, especially in terms of Pentagon budgets, an almost minuscule contract, clocking in at just $1 million. Its focus on developing an AI that can win scenarios in one game, and then testing if that AI can win a second game, is somewhat narrow. Yet the implications for more accurate wargaming through thoughtful AI could have a huge impact on how weapons systems are designed, modeled, and ultimately used by human commanders aided by AI agents.

“Wargaming is a well-established and critical element of real-world military planning and weapon system development, especially in complex scenarios such as those our users intend to address with concepts such as JADC2,” said Bernstein. “Gamebreaker is focused on the development of a methodology to explore and identify unique opportunities to defeat an adversary in competition – while we are developing those tools using games, we believe Gamebreaker’s methods will have direct applicability to the military services’ development and employment of joint all domain operational concepts.”

Gamebreaker, part of DARPA’s larger initiative in military AI, is about winning Real Time Strategy games. Combining entertainment with simulation, these games seek to foster both fun and a balanced, competitive experience, one where each player stands a reasonable chance of winning.

This is directly at odds with actual war, where the smoothest path to victory is maximizing every unfair advantage a side can muster against a rival.

To succeed at breaking this balance, teams must build an AI that can play a strategy game, and then, while staying within the rules of the game, figure out how to use all the available pieces in the best and most unfair way against its opponents. It is about novel tactics, without any of the limitations of human understanding holding back how the algorithm plots a path to victory.
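As a rough illustration of the search described above, here is a minimal sketch assuming a toy game: enumerate candidate strategies for each side, estimate win rates by repeated simulation, and flag matchups where one side wins far more often than a balanced game would allow. Everything here (the strategy names, the `simulate_match` stub, and the 70 percent threshold) is a made-up placeholder for illustration, not Northrop Grumman's or DARPA's actual method.

```python
import itertools
import random

# Hypothetical illustration of the "broken state" idea: sweep pairs of
# strategies, estimate win rates by repeated simulation, and flag matchups
# that are far from the roughly 50/50 outcome a balanced game implies.

def simulate_match(blue_strategy: str, red_strategy: str) -> bool:
    """Stand-in for one simulated engagement; returns True if blue wins.
    A real implementation would drive a game engine instead of a toy table."""
    strength = {"massed_strike": 0.5, "sensor_denial": 0.6, "attrition": 0.4}
    p_blue = strength[blue_strategy] / (strength[blue_strategy] + strength[red_strategy])
    return random.random() < p_blue

def find_broken_states(strategies, trials=500, threshold=0.7):
    """Return strategy pairs where one side's estimated win rate exceeds threshold."""
    broken = []
    for blue, red in itertools.product(strategies, repeat=2):
        wins = sum(simulate_match(blue, red) for _ in range(trials))
        win_rate = wins / trials
        if win_rate >= threshold or win_rate <= 1 - threshold:
            broken.append((blue, red, win_rate))
    return broken

if __name__ == "__main__":
    for blue, red, rate in find_broken_states(["massed_strike", "sensor_denial", "attrition"]):
        print(f"blue={blue:14s} vs red={red:14s}  blue win rate={rate:.2f}")
```

In a real Gamebreaker-style workflow, the simulation stub would be replaced by calls into the game itself, and the flagged asymmetries would be the "broken states" reported back to planners.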

“Command offers considerable flexibility for manipulating the capabilities of the simulated combat units, and importantly, the game builds these units based on real-world systems,” said Bernstein. “As a consequence, Command allows us to explore the implicit beliefs a weapon system developer had when designing a given system, allowing us to experiment with novel approaches to various real-world mission scenarios.”

Command: Modern Operations is built on open-source data. That means Northrop Grumman can easily replace the game’s default information with more accurate characteristics, like undisclosed missile speeds or sensor ranges.

For some scenarios, like a Cold War game where Warsaw Pact naval forces use submarines to launch an attack against NATO patrols in the Arctic sea, some of the information about vehicle and weapon capabilities is public and declassified and already incorporated into the game. In more modern scenarios, such as a hypothetical battle in the South China Sea, the game incorporates an open-source understanding of the technical capabilities involved, and the open-source code allows users to update it with more publicly unavailable information.

The open-source code also allows Northrop Grumman to easily insert their own AI agent into the game, and study how it interacts with all the available units. These include systems as large as aircraft carriers down to the precision of guided missiles launched from helicopters. Crucial to how Command: Modern Operations works is the way it models and incorporates sensor data, revealing submarines only where they were last observed, rather than showing them on-screen if they are out of the reach of any sonar systems.
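To illustrate that last point about sensor modeling, the sketch below shows the general "last observed position" idea under simple assumptions: a contact's displayed position only refreshes while a sensor currently holds it. The class and function names are hypothetical and are not drawn from Command: Modern Operations' actual data model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Toy illustration of "show the contact where it was last observed":
# the displayed position refreshes only while the contact is in sensor range.

@dataclass
class Contact:
    true_position: Tuple[float, float]                    # where the submarine really is
    last_observed: Optional[Tuple[float, float]] = None   # what the display shows

def update_display(contact: Contact, sensor_position: Tuple[float, float],
                   sensor_range: float) -> Optional[Tuple[float, float]]:
    """Refresh last_observed only if the contact is inside sensor range."""
    dx = contact.true_position[0] - sensor_position[0]
    dy = contact.true_position[1] - sensor_position[1]
    if (dx * dx + dy * dy) ** 0.5 <= sensor_range:
        contact.last_observed = contact.true_position
    return contact.last_observed  # may be stale, or None if never detected
```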

If there exists a novel strategy in the game that allows a commander to more effectively employ, say, ship-launched guided missiles and helicopter-borne sensors, the goal is that the Gamebreaker AI will find it. For initial testing, the Gamebreaker AI will only compete against other AI players, though the potential exists for it to play against human opponents.

Proving that AI can win games is an essential first step to proving that AI can actually offer useful insight to commanders. If humans are going to place trust in an algorithmic order of battle, it is absolutely essential to have faith that the algorithm knows what it is doing.
 

space cadet



Navy F/A-18 Squadron Commander's Take On AI Repeatedly Beating Real Pilot In Dogfight​

Everyone has an opinion when it comes to the stunning results of DARPA's AlphaDogfight trials, now hear what the skipper of a fighter squadron thinks.​

BY COMMANDER COLIN 'FARVA' PRICE | AUGUST 24, 2020


The recent 5 to 0 victory of an Artificial Intelligence (AI) pilot developed by Heron Systems over an Air Force F-16 human pilot does not have me scrambling to send out applications for a new job. However, I was impressed by the AlphaDogfight trials and recognize its value in determining where the military can capitalize on AI applications.

For most military aviators, it may be easy to scoff at the artificiality of the contest. I may have even mumbled, “Never would have happened to a Navy pilot…” Instead, I think it is important not to get wrapped too much around the axle about the rules of the contest and instead focus on a couple of details that really jumped out at me on the advantages an AI pilot would have over a human pilot.

For the contest setup, the argument about the death of the dogfight, or that there is no need for within-visual-range engagements anymore, is a tired one. There was a pretty popular movie in the ‘80s about that very argument, so I am not going to rehash it here. The fact is we still constantly train to dogfight in the Navy, or, as it is more commonly referred to, 'Basic Fighter Maneuvers,' or BFM for short.



BFM is great airborne training for gaining an understanding of your energy state in relation to the enemy and to exercise your situational awareness in a three-dimensional space in a physically demanding environment. An aviator has to understand how to aggressively maneuver their aircraft while at the same time integrating their weapon systems to cue a weapon, assess the quality of the weapons track, and determine if the trigger should be pulled to employ the weapon. All at the same time, they must be preventing the enemy from accomplishing the same process. It is a dynamic and stressful environment that creates better fighter pilots. I have yet to meet a pilot who is an above-average BFM pilot, but struggles in other mission sets.

There are multiple reasons why aircrew may find themselves at the merge with the enemy. But if they do end up at the merge, the goal is always the same: take the first shot and kill the enemy before the enemy can shoot you.

This fact sometimes gets lost in training engagements. To maximize the training, the BFM fight will often be taken to a “logical conclusion.” Even though each aircraft may trade shots early in the fight, the two aircraft will keep fighting down to the hard deck till there is an obvious winner. Aircrew will come to the debrief patting themselves on the back for the gun footage they have of the other aircraft, but once the footage is played, they realize they absorbed the first shot well before their triumphant gun pipper placement. The real-world logical conclusion could have been very different if they were missing a wing or engine because of a missile impact.

The goal at the merge of achieving the first shot must be continually hammered home.

Still, the reality is that missiles do not always guide and fuze, thus we extend fights to teach aviators how to continue to survive or turn a defensive situation into an offensive one. The true sport of fighter jet aviators is a guns-only BFM engagement. A guns-only BFM engagement is a test of who can efficiently maximize their energy package and capitalize on each merge. Much like chess, truly great BFM pilots are thinking two to three merges ahead, not just reacting.






It does not take much skill to put the aircraft’s lift-vector on the other aircraft and yank on the Gs. In fact, if in doubt, just doing that will take care of 75 percent of the fight. But BFM is about being smoothly aggressive. Understanding the difference between when it is necessary to max-perform the aircraft and when it is time to preserve or efficiently gain energy back is key. In a tight turning fight, gaining a couple of angles at each merge can suddenly result in one aircraft saddled in the other aircraft’s control zone working a comfortable rear quarter gun-tracking shot.

In true gamesmanship fashion, the guns-only BFM engagement was the setting for the AlphaDogfight contest. So what jumped out at me about the engagements? Three main points. First was the aggressive use of accurate forward quarter gun employment. Second, was the AI’s efficient use of energy. Lastly was the AI’s ability to maintain high-performance turns.


During BFM engagements, we use training rules to keep aircrew and aircraft safe. An example of this is using a hard deck, which is usually 5,000 feet above the ground. Aircraft can fight down to this pretend ground level and if an aircraft goes below the hard deck, they are considered a “rocks kill” and the fight is ended. The 5,000 feet of separation from the actual ground provides a safety margin during training.

Another training rule is that forward-quarter gunshots are prohibited. There is a high potential for a mid-air collision if aircraft are pointing at each other trying to employ their guns. Because of the lack of ability to train to forward-quarter gunshots, it is not in most aviators' combat habit patterns to employ such a tactic approaching the merge. Even so, it would be a low-probability shot.

A pilot must simultaneously and continuously solve for plane-of-motion, range, and lead for a successful gun employment. It is difficult enough for a heart-of-the-envelope rear-quarter tracking shot while also concentrating on controlling a low amount of closure and staying above the hard deck. At the high rates of closure normal for a neutral head-on merge, a gun envelope would be available for around three seconds. Three seconds of intense concentration to track, assess, and shoot, while at the same time avoiding hitting the other aircraft. The Heron Systems AI on several occasions was able to rapidly fine-tune a tracking solution and employ its simulated gun in this fashion. Additionally, the AI would not waste any brain cells on self-preservation approaching the merge, avoiding the other aircraft. It would just happen. The tracking, assessing, and employing process for a missile is not much different than for the gun. I am pretty confident AI could shoot a valid missile shot faster than I can, given the same data I am currently presented with in the cockpit.
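A quick back-of-the-envelope calculation shows why that window is so short: the time in the envelope is just the envelope depth divided by the closure rate. The closure speed and gun-range numbers below are assumptions chosen for illustration, not figures from the trials.

```python
# Rough sanity check on the "about three seconds" gun window at a head-on merge.
# All numbers here are illustrative assumptions, not figures from the trials.

KT_TO_FPS = 1.68781  # knots to feet per second

closure_kt = 600.0            # assumed combined closing speed of the two jets
envelope_far_ft = 4000.0      # assumed maximum effective gun range
envelope_near_ft = 1000.0     # assumed minimum range before a breakaway is needed

closure_fps = closure_kt * KT_TO_FPS
time_in_envelope = (envelope_far_ft - envelope_near_ft) / closure_fps
print(f"Time inside the gun envelope: {time_in_envelope:.1f} s")
# With these assumptions: 3000 ft / ~1013 ft/s is roughly 3.0 s, consistent
# with the "around three seconds" figure quoted above.
```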


The second advantage of AI was its ability to maintain an efficient energy state and lift vector placement. BFM flights certainly instill aviators with confidence in flying their aircraft aggressively in all regimes of the flight envelope. However, in today’s prevalent fly-by-wire aircraft, there is less aircraft feel providing feedback to the pilot. It takes a consistent instrument scan to check the aircraft is at the correct G, airspeed, or angle-of-attack for the given situation.

Even proficient aviators have to use a percentage of their concentration (i.e. situation awareness) on not over-performing or under-performing the aircraft. AI could easily track this task and would most likely never bleed airspeed or altitude excessively, preserving vital potential and kinetic energy while also fine-tuning lift vector placement on the other aircraft to continue the fight if required.
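The "energy" being tracked here has a standard textbook expression: specific energy (energy height) is altitude plus kinetic energy per unit weight, E_s = h + V^2/(2g). The sketch below, with assumed altitudes and airspeeds, shows the bookkeeping a pilot's instrument scan, or an AI, is effectively doing.

```python
# Specific energy E_s = h + V^2 / (2g): the quantity a pilot (or an AI)
# is trying not to bleed away during a fight. Numbers below are illustrative.

G_FTPS2 = 32.174       # gravitational acceleration, ft/s^2
KT_TO_FPS = 1.68781    # knots to feet per second

def specific_energy(altitude_ft: float, airspeed_kt: float) -> float:
    """Energy height in feet: altitude plus kinetic energy per unit weight."""
    v = airspeed_kt * KT_TO_FPS
    return altitude_ft + v * v / (2.0 * G_FTPS2)

before = specific_energy(15000.0, 450.0)   # assumed state entering a merge
after = specific_energy(12000.0, 350.0)    # assumed state after a hard turn
print(f"Energy height before: {before:,.0f} ft, after: {after:,.0f} ft, "
      f"bled: {before - after:,.0f} ft")
```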

Lastly is AI’s freedom from human physiological limitations. During the last engagement, both aircraft were in a prolonged two-circle fight at 9 Gs on the deck. A two-circle fight is also referred to as a 'rate fight.' The winner is the aircraft that can track its nose faster around the circle, which is directly proportional (disregarding other tools such as thrust vectoring) to the amount of Gs being pulled. More Gs means a faster turn rate. 9 Gs is extremely taxing on the body, which the pilot in the contest did not have to deal with, either. A human pilot would have to squeeze every muscle in the legs and abdominals, in addition to focused breathing, in order not to black out. During training, I maintained 9 Gs in the centrifuge for about 30 seconds. Then I went home and took a nap, and that was without being shot at. AI does not care about positive or negative Gs. It will perform the aircraft at the level required.
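For reference, the relationship described here has a standard form: level-turn rate is omega = g * sqrt(n^2 - 1) / V, where n is the load factor in Gs and V is true airspeed, so higher sustained G does mean a faster rate around the circle (close to, though not exactly, proportional at high G). The airspeed in the sketch below is an assumption for illustration, not a figure from the engagement.

```python
import math

# Level-turn rate: omega = g * sqrt(n^2 - 1) / V.
# Airspeed is an illustrative assumption, not from the AlphaDogfight engagement.

G_FTPS2 = 32.174     # gravitational acceleration, ft/s^2
KT_TO_FPS = 1.68781  # knots to feet per second

def turn_rate_deg_s(load_factor_g: float, airspeed_kt: float) -> float:
    """Level-turn rate in degrees per second for a given G and true airspeed."""
    v = airspeed_kt * KT_TO_FPS
    omega_rad = G_FTPS2 * math.sqrt(load_factor_g ** 2 - 1.0) / v
    return math.degrees(omega_rad)

for g in (4.0, 6.0, 9.0):
    print(f"{g:.0f} G at 400 kt: {turn_rate_deg_s(g, 400.0):.1f} deg/s")
# Higher sustained G means a faster rate around the circle, which is why a
# 9 G rate fight rewards whoever can hold the G the longest.
```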


The truth is current aircraft have to be built to support the 'pile of human' sitting in them. The human will always be the limiting factor in the performance of an aircraft. I fight the jet differently now than I did as a junior officer when I was young and flexible. I have to fight differently. I know what my capabilities are to get a consistent and repeatable shot with the little bit of neck magic I have left to keep sight of the other aircraft. The facts that in the contest the AI had perfect information at all times and that rules of engagement were not a factor are not inconsequential details. I recognize that providing the amount of data and sensor fusion the AI would require to perform at the same level in a real aerial engagement (one that does not take place in cyberspace) is not a small undertaking and is still a bit in the future. The rules of engagement discussion could fill up the syllabus for an entire semester of an ethics class, and will always be a touchy subject with regards to AI's involvement in war.

I am not an engineer, nor an ethics professor. Yet, as a pilot, I am intrigued. A computer model was able to react to the movements of a human pilot and effectively employ weapons. During the five engagements, the AI had 15 valid gun employments and the human pilot had zero. These results also hint at the AI’s ability to avoid being shot while effectively employing its own weapons.


An AI-enhanced weapon’s employment system in my aircraft? I am not ready for Skynet to become self-aware, but I am certainly ready to invite AI into the cockpit. Hell, I am only a voting member as far as the flight controls are concerned in the Super Hornet anyways. If I put a control input in that is not aerodynamically sound (i.e. could result in a departure from controlled flight), the flight control system will not move the control surface or will move a different surface to give me the movement I am requesting. Who is flying who?

So, if tomorrow my seven-year-old daughter decides she wants to become a Naval Aviator, I am not going to shoot down the notion and go on a rant about the last generation of fighter pilots. I know there will be a Navy jet for her to fly. My future grandchildren, however? Saddle up, kids, and prepare yourselves for some of Grandad’s wild tales of the greatest flight in Naval Aviation: the one-hour BFM cycle back to the Case One s**t-hot break. Those were the days!

 

Nightfox

Assuming we take this AI program and integrate it into an F-16, as was virtually done in the trial, what would be the physical limit on the AI due to airframe constraints? The AI can pull 9 Gs and above, but how much more can the airframe take?
 

space cadet

That depends on the condition of the airframe and how long anyone wants the airframe to last. If someone over-Gs an airframe, they are supposed to report it and it goes through additional checks. F-16Ns got retired real early because of being constantly over-G'd, even though they had a G-limiter. I read a story by one of the pilots that flew them; he said they constantly used the G-limiter (but of course you spike over the G limit before the jet catches it), and that was his reasoning for the cracks they suffered.
 

Nightfox

Let's assume it's a brand new F-16V and the AI is integrated into it. Asking because I believe that in order to get the best out of AI, the structural aspects of a jet need to be overhauled. I saw a few clips of the fight and they didn't use this factor as a limiter, so that the fight could go all out.
 