I often get questions from prospective pilots about self-flying aircraft and the outlook for the pilot profession. Given the hype heard over the last 30+ years it's a natural question, and with large manufacturers beginning to study the idea, it comes up more and more. The hype has been loud enough that, in this author's opinion, it has begun to affect pilot career and supply decisions. I've seen people purposely avoid flying for large cargo carriers because the hype has eroded their confidence in a long, successful career that won't be cut short by self-flying aircraft. In a twist of irony, one of the reasons Boeing is looking at self-flying aircraft is a lack of pilots. Unfortunately, this has done little to solve the real problem; by signaling a future that doesn't appear promising for aspiring young aviators, it becomes, in a way, a self-fulfilling prophecy.
So with that premise, I've attempted to present a few concepts that will hopefully clarify things for those interested in pursuing a career in aviation as aviators. I think you may find that the hurdles autonomous airlines must overcome before they really take hold are quite extensive.
News articles heavily emphasize that computers fly aircraft most of the time. While this is sometimes true, it often misrepresents a pilot's true role, which does a real disservice to the profession. In these articles, pilots are relegated to being stick-and-rudder masters as their loftiest purpose, a perch the articles usually show is quickly being made antiquated by advanced autopilot systems. The truth is that pilots are not there, and never really have been, just for their stick-and-rudder skills (although those skills are important to master for many reasons); they are there for their judgment. The following quote is interesting in the context of the conversation on the benefits of automation, even though it was written almost 80 years ago.
The readiness to blame a dead pilot for an accident is nauseating, but it has been the tendency ever since I can remember. What pilot has not been in positions where he was in danger and where perfect judgment would have advised against going? But when a man is caught in such a position he is judged only by his error and seldom given credit for the times he has extricated himself from worse situations. Worst of all, blame is heaped upon him by other pilots, all of whom have been in parallel situations themselves, but without being caught in them. If one took no chances, one would not fly at all. Safety lies in the judgment of the chances one takes.
— Charles Lindbergh, journal entry 26 August 1938, published in The Wartime Journals, 1970
Pilots are there for their innately human qualities, which, if not properly trained, can also be weaknesses. The following points are some of the qualities that pilots uniquely contribute to the flight deck environment.
A pilot's ability to take years of experience and education and apply them to a new, dynamic situation. No flight is ever exactly the same. In the FOI (Fundamentals of Instructing) we call this not just rote learning, understanding, or even application; it is the highest level of learning, called correlation. Comparing pilot learning to AI, it is instructive to know that AI is commonly grouped into four types. The first type is reactive, similar to IBM's Deep Blue; the second, which has limited memory, is able to develop more accurate models of the world around it. The last two, which involve a level of empathy and then self-awareness, have never been developed (although movies and people talk about them a lot), and it is these last two that would really be required before replacing pilots could be considered.
Ability to empathize and feel. AI and computers cannot empathize now, and will not be able to in at least the near future. They can be programmed to look like they are empathizing, but their ability to truly "put themselves in someone else's shoes" is currently the stuff of science fiction. That ability to genuinely feel respect and empathy for something is important. For example, even though a company policy may require an aircraft to avoid high-level radar returns by 20 miles (note: not all dangerous storms produce a clear radar return), a pilot on the flight deck will have a whole different perspective on, and respect for, a mesocyclone igniting with lightning and will likely give it more distance. In the same scenario, a computer would be limited by its nature to following an emotionless rule or complicated algorithm, without realizing that the radar return represented a much more severe threat, let alone noticing the connected overhang extending from the system, full of hazards but not reflected on the radar.
The pilot's ability to empathize allows them to communicate and work more effectively with other human beings and systems. Being able to relate to other perspectives is a powerful thing, important when making judgments. It can also be what motivates us during a medical emergency or a crew or passenger issue. It is part of what allows us to "weigh decisions" not just in the realm of "pure logic" and "neural net variable weighting" but in the realm of meaning. Just ask passengers how frustrated they become when they meet a gate agent who has been trained to be an emotionless robot, and how poorly their needs are met.
The truth is that we as human beings trust not just what we understand; more importantly, we trust what understands us, especially when our lives are on the line and the decision of whom to trust matters to us.
We humans are uniquely adapted to an imperfect world; computers, not so much. Just think back to the last time you saw a blue screen on your computer. As simple as things would be if everything were "on" or "off," "0" or "1," that simply is not the case. We live in an unruly, chaotic, analog world, and it is at times the bane of engineers' existence. Not to worry, though: as humans we are quite at home in this world. As babies we often bristle at the quietness and rigidity of the ordered homes we come into and relish the movement and sound of the natural world. Pilots are the components that best adapt the aircraft system to these realities.
Problems like sandbags blocking a road or a stalled car in front of us are intuitively obvious to us but usually not to automated vehicles. We deal with these challenges with relative ease where autonomous trains of thought are literally stuck. Similarly, in flight, pilots effortlessly sidestep bumpy or even dangerous cumulus buildups that don't show up on radar or look much different from other clouds, and, like the flights themselves, every cloud is evolving and changing.
Our ability to forecast weather, as humans and as computers, is still a bit of an art, and pilots use a great deal of intuition across a very wide range of flight circumstances to deal with it. Forecast winds often don't end up as predicted; weather systems can stall or do things not planned for. It is important to note that not every risk is apparent to computer sensors. As just one example, pilots can usually sense the potential for wind shear before the automated systems issue a caution or warning. Those automated warnings act as confirmation and fail-safe, assist pilots in dealing with such threats, and do well to augment the human element.
As the most flexible component of the aircraft, pilots can make up for system errors or component failures. If we start combinatorially indexing the possible failures of an aircraft with hundreds of thousands of parts, the list of failure combinations gets very, very big. An aircraft that has had one of the many failure points of an autopilot fail can still operate and generate revenue with a human pilot. The more complex aircraft have become, the greater their exposure to failures and glitches.
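To make that scale concrete, here is a minimal sketch of the combinatorics. The part count is hypothetical (a real airliner has far more parts than this); the point is only how quickly the number of distinct failure combinations grows:

```python
from math import comb

def failure_combinations(n_components: int, max_simultaneous: int) -> int:
    """Count the distinct ways that up to `max_simultaneous` of
    n_components can fail at the same time (at least one failing)."""
    return sum(comb(n_components, k) for k in range(1, max_simultaneous + 1))

# Hypothetical 1,000-part system:
print(failure_combinations(1000, 1))  # 1,000 single faults
print(failure_combinations(1000, 2))  # 500,500 single or double faults
print(failure_combinations(1000, 3))  # 166,667,500 combinations of three or fewer
```

Even at a toy scale of 1,000 parts, allowing just three simultaneous faults yields over 166 million distinct failure states to anticipate, which is why exhaustively pre-programming responses to every combination is impractical, and why a flexible human in the loop is so valuable.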
On airline aircraft it is not uncommon to see erroneous indications on a daily basis, and in those situations humans correlate the bigger picture from the available information (which sometimes includes the millions of sensory nerves in our own bodies) to corroborate the story the aircraft may be telling us. Autonomous vehicles will be far more sensitive to equipment malfunction or failure than manned vehicles, and therefore minor discrepancies that at one time never grounded expensive aircraft may ground them much more often.
Break rules. In our unruly world, we sometimes have to break standard operating procedure to make things work. For example, it might be worth accepting an approach speed a few knots above the stabilized-approach requirement if the situation merits it, and there are definitely airports and situations that can merit it. There are also many times that don't.
For good judgment to work, there must be accountability, and computers lack real accountability. In the moments that count, the programmers and manufacturers simply aren't there. When a pilot is sitting up front, passengers know that person has a vested interest in the success of their flight that goes beyond liability, programming, cost, or even company reputation. This vested interest and our "humanity" push us to make decisions when we may need to compromise general rules that computers might be unemotionally programmed to maintain.
On the ground, the consequence of not acting might just be being late, or perhaps a minor accident; in the air it's orders of magnitude bigger. Do we as humans feel comfortable giving the power to break rules to something that can't be held accountable for its judgments later? The answer is usually no, unless we have no other option. Accountability is what gives credibility to most judgment; it gives the entity making the choice a healthy appreciation of the consequences. Computers cannot appreciate consequences, and therefore, when life-and-death decisions occur, it is difficult to trust a computer outside the realm of its programming. This is why, in almost every case of deducing meaning from computer models or calculations, especially when life or death is on the line, a human being evaluates (judges) the information a computer provides in the context of the human picture.
Challenger was lost because NASA came to believe its own propaganda. The agency's deeply impacted cultural hubris had it that technology — engineering — would always triumph over random disaster if certain rules were followed. The engineers-turned-technocrats could not bring themselves to accept the psychology of machines without abandoning the core principle of their own faith: equations, geometry, and repetition — physical law, precision design, and testing — must defy chaos. No matter that astronauts and cosmonauts had perished in precisely designed and carefully tested machines. Solid engineering could always provide a safety margin, because, the engineers believed, there was complete safety in numbers.
— William E. Burrows, This New Ocean, 1998.
The FAA, I think, understands this, and therefore gives pilots emergency authority. The FAA trusts a pilot to make whatever decisions are necessary to protect the flight, but with that very high level of trust to break as many rules as needed comes a high level of accountability. Pilots therefore weigh many things very carefully (often information and analysis provided by computers) before exercising such judgment. But even without exercising emergency authority, pilots make many decisions on a daily basis that exercise judgment, based on a great deal of information not always within easy reach of computer sensors.
Flexible and cheaper. An autonomous flight will require a great deal of infrastructure on the ground to work. Currently, Cat 3 autoland capability is available only at a select number of very large airports, which have a specific need to go through the steps necessary to get those approaches approved. One of the advantages of flight as a transportation mode is its flexibility: instead of committing to large infrastructure projects such as laying rail, building hyperloops, or paving highways, aviation needs only an aircraft, a runway, and fuel. The majority of airports airlines fly into are not large hub operations and would be hard-pressed to develop the infrastructure necessary for autonomous all-weather operations, even with pilots on board.
Not hackable… autonomous, wireless-dependent, computer-driven aircraft obviously present a very new and real threat in today's high-tech world.
Benefits of Automation
After having covered some of the strengths of the human element, I don't want to marginalize the benefits of automation and the ever more sophisticated tools at our (human) disposal. For millennia, tools have allowed us to increase our capacity and improve our lives, and it's no different in aviation. There are some things that computers and artificial neural nets do very well. At the same time, I hope to temper the sometimes overzealous hype around automation; both elements have a place in a minimal-risk air transport system. I worry that we may be too confident in our automation's capabilities.
In flying I have learned that carelessness and overconfidence are usually far more dangerous than deliberately accepted risks.
— Wilbur Wright in a letter to his father, September 1900
As aircraft have become more complex, efficient, and capable, and airspace has become more congested, these automated systems have continued to evolve. In some cases, under extremely controlled conditions (as close to perfect as we can make the world: VMC, well-equipped airports), we will likely see some autonomous vehicles. Their success will likely depend on how well their environment can be controlled, unless very high levels of AI can be developed (these currently do not exist).
What is most promising about the continued development of automation is its potential to augment the human element. Aviation has spent many decades in the trial and error of merging the strengths of automation with the strengths of the human. The automotive world is just starting to catch up.
One case, not in aviation, that is particularly interesting is the coordination of these two elements (computer and human) to create a level of result that could not be produced by either one alone. When chess grandmaster Garry Kasparov was beaten by IBM's Deep Blue back in 1997, he wondered whether he would have been more successful had he had access to the large database of previous chess moves Deep Blue had. This article goes into more depth, but the result was that humans who could work with, and at times override, supercomputers like Deep Blue were usually more successful than either working alone.
Similarly, in aviation, where the costs of error are so high, and unlike automobile automation, where a default of "not moving" might be an acceptable inconvenience, it is likely we will find that the systems that better integrate the differences and strengths of both computer and human will best handle the more demanding and dynamic world of aerospace over the next 100 years. The good news is that in that world, educated and capable pilots will still be very much marketable.
Side note – A case for Two Pilots
Essentially, the need for good judgment in aviation has never changed, and research has shown that two heads are better than one, especially when it comes to exercising good judgment and solving problems. Insurance companies have reflected this for quite some time: only where liability exposure is reduced, on smaller aircraft, have aircraft been permitted to be certified for a single pilot. The liability exposure of an 8-seat aircraft is very different from that of a 150-seat aircraft. Safety is not necessarily the absence of risk but the presence of acceptable risk, and aircraft that operate single-pilot inherently incur additional risk, which is why that risk has been acceptable only in smaller aircraft.
In addition, the move from three pilots to two was not the same as moving from two pilots to one, and the latter will not be nearly as simple. This is not a trivial thing. Research has also shown a law of diminishing returns: value is not always added by additional people beyond a small group, but a group is significantly better than one where judgment is concerned.
Judgment and human checks and balances become even more relevant in an age when terrorism uses large airliners as weapons of mass destruction or, as in the Germanwings case, a human mind goes bad. It is very rare in our world to give one single person so much autonomous authority. Two people as a team, if properly trained, have a better chance of staying out of trouble and managing such a large responsibility than just one.
This is to say nothing of the load shedding that occurs between crewmembers in real-world situations: day-to-day operations at busy airports, bad-weather days, or emergency operations. Nor of the psychological support the buddy system provides pilots who, on a daily basis, face decisions with material consequences.
Needless to say, we see in nature the productive power of pairs, especially when they are trained to maximize each other's capabilities.