The Human Factor and Trust
April 25, 2024

In the aftermath of Germanwings 9525, we have learned that the crash of the aircraft was an intentional act of mass murder. While many questions about this crash remain unresolved, several issues emerge that will demand the industry's attention going forward.

Who Do You Trust?

We’ve given pilots the ability, after 9/11, to lock the cockpit door so that no potential hijacker can enter. Yet we’ve recently seen an Ethiopian co-pilot lock out his captain to seek political asylum, and now a deranged pilot commit the mass murder of innocent passengers and crew. Did we over-react to 9/11 by preventing flight crews from re-entering the cockpit?

In the United States, when a crew member leaves the cockpit, a flight attendant or another crew member must enter and remain until that pilot returns, but this practice is not widespread elsewhere in the world. The theory, at least, is that the second crew member could prevent the door from being locked shut by opening it in an emergency. This is a logical step that several European airlines are adopting in the aftermath of the crash to prevent a similar situation from arising again, though arguably it is overdue. It seems sad that “trust but verify” will soon become the watchword for an industry that prides itself on professionalism and safety first.

Should this Impact Future Aircraft Designs?

In modern fly-by-wire systems, we see flight envelope protection measures in place, including Airbus “alpha protection” against high angles of attack and stalls, and automated limits on pilot control of the airplane. The question now is whether we need additional controls or should continue to trust pilots.
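To make the idea concrete, here is a minimal sketch, in Python with invented threshold values, of the kind of logic an angle-of-attack limiter applies. It illustrates the concept only; it is not Airbus’s actual control law:

    def limit_pitch_command(pilot_pitch_cmd_deg: float,
                            current_alpha_deg: float,
                            alpha_prot_deg: float = 12.0,   # illustrative threshold
                            alpha_max_deg: float = 15.0) -> float:
        """Fade out nose-up authority as angle of attack nears the limit."""
        if current_alpha_deg >= alpha_max_deg:
            # At the hard limit: allow no further nose-up command at all.
            return min(pilot_pitch_cmd_deg, 0.0)
        if current_alpha_deg > alpha_prot_deg and pilot_pitch_cmd_deg > 0.0:
            # Inside the protection band: scale nose-up input toward zero.
            margin = (alpha_max_deg - current_alpha_deg) / (alpha_max_deg - alpha_prot_deg)
            return pilot_pitch_cmd_deg * margin
        return pilot_pitch_cmd_deg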

In this case, the pilot commanded the autopilot to 96 feet, according to news reports, in an area where flight charts show a minimum en-route altitude of more than 6,000 feet. Should autopilot software introduce additional protections to cross-check the aircraft’s GPS position, determine the required minimum altitude, and prevent the pilot from commanding a descent below minimums? (Recall the 2012 Sukhoi Superjet demonstration flight in Indonesia, which flew into a mountain and would have benefited from exactly this.) All of that information is available today, but it is not integrated to that level of logic within avionics. Should we be designing more fail-safes into that logic? Under what circumstances could, or should, they be overridden? Can we really accommodate every eventuality in software, and what happens when something does go wrong, mechanically or otherwise?
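As a thought experiment, a cross-check of a selected altitude against charted minimums might look like the sketch below. Everything here is hypothetical, from the one-degree grid lookup to the names; a real implementation would draw on certified terrain and airway databases:

    from dataclasses import dataclass

    @dataclass
    class Position:
        lat: float
        lon: float

    # Hypothetical chart data: minimum en-route altitudes (feet) by grid cell.
    MEA_FT_BY_GRID = {
        (44, 6): 6600,   # a cell over the French Alps, for illustration
        (48, 2): 2300,   # a cell over flatter terrain
    }

    def minimum_enroute_altitude_ft(pos: Position) -> int:
        """Look up the charted minimum altitude for the aircraft's GPS position."""
        return MEA_FT_BY_GRID.get((int(pos.lat), int(pos.lon)), 10000)  # conservative default

    def validate_selected_altitude(selected_ft: int, pos: Position) -> int:
        """Refuse an autopilot altitude target below the charted minimum."""
        mea_ft = minimum_enroute_altitude_ft(pos)
        if selected_ft < mea_ft:
            # Clamp to the minimum; a real system would also alert crew and ground.
            return mea_ft
        return selected_ft

Fed the reported 96-foot target over the Alps, validate_selected_altitude(96, Position(44.3, 6.4)) would clamp the command to the 6,600-foot charted minimum rather than accept the descent.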

We’ve seen human error result in crashes of high-technology aircraft, including the 2002 mid-air collision over Überlingen, Germany, in which both aircraft carried TCAS systems issuing coordinated avoidance commands. There, the air traffic controller instructed one crew to descend into the path of the oncoming aircraft while their TCAS commanded a climb; the crew followed the controller rather than the instrument, and both aircraft descended into each other instead of one climbing and one descending, as the system was designed. Do we trust the human in air traffic control, or the instrument on our panel?
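The rule the industry settled on after that collision is simple to state: when a TCAS resolution advisory is active, the crew follows it, even against ATC. A minimal sketch of that priority, with hypothetical function and message names:

    from typing import Optional

    def resolve_vertical_guidance(atc_instruction: Optional[str],
                                  tcas_ra: Optional[str]) -> str:
        """A TCAS resolution advisory outranks any ATC vertical instruction."""
        if tcas_ra is not None:
            return tcas_ra          # e.g. "CLIMB": follow the RA, whatever ATC says
        if atc_instruction is not None:
            return atc_instruction  # no RA active: follow the controller
        return "MAINTAIN"           # default: hold current clearance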

Today’s technology enables aircraft to literally fly themselves from take-off to touchdown, and some would say that pilots are really only needed when something goes wrong. But the danger of over-reliance on automation was clearly shown by the Asiana Airlines 214 crash in CAVU weather at San Francisco in 2013, when the pilots proved unable to fly a visual approach while the ILS glideslope was shut down for repairs. Over-reliance on technology can lead to disaster when pilots have little experience flying the aircraft manually and things do go wrong.

The Drone Approach?

Should the industry take automation to the next level and enable ground control of the aircraft in the case of hijackers, or an emergency in the cockpit, using the same mechanisms used to remotely fly drones? This could enable ground personnel to bring the aircraft to a safe landing at a nearby airport, overriding the cockpit when something goes awry. But it opens up an entirely different set of issues, including the cyber-security needed to prevent a rogue party from remotely seizing control of an aircraft with those capabilities. The trade-offs between design choices are not always easy or straightforward. In addition, Boeing’s “uninterruptible autopilot system” is worth looking into, if for no other reason than that remote control could increase safety when an aircraft deviates from its flight plan.
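Any such remote-override link would, at a minimum, have to authenticate every command it accepts. As a rough illustration of just one layer of that problem, here is a sketch that checks a message authentication code before acting on a ground command; the pre-shared key and names are invented, and a real avionics data link would need far more (certificates, key rotation, anti-replay protection, hardware key storage):

    import hashlib
    import hmac

    # Invented pre-shared key for this airframe (illustration only).
    AIRFRAME_KEY = b"example-shared-secret"

    def verify_ground_command(payload: bytes, received_mac: bytes) -> bool:
        """Accept a remote-override command only if its MAC verifies."""
        expected = hmac.new(AIRFRAME_KEY, payload, hashlib.sha256).digest()
        # compare_digest avoids leaking information through timing differences.
        return hmac.compare_digest(expected, received_mac)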

The Bottom Line

Human error is a fact of life that we all must deal with. But in aviation, where the consequences of a mistake can be fatal, we require multiple layers of safety, including redundant systems. Most accidents result from multiple failures rather than just one, and each accident raises the question of what could be done differently to prevent something similar from ever happening again. We’ve come a long way in aviation safety in recent years, but the system is fundamentally based on trust – trust that the pilots want to arrive safely as much as their passengers do, and will do everything in their power to do so. When that sacred trust is broken, the system can break down. How we best restore that trust will be debated throughout the industry as we learn more about the Germanwings tragedy.

4 thoughts on “The Human Factor and Trust”

  1. Smarter autopilot software could have prevented almost all of the airliner crashes of the past 10 years or more. As hardware has become more reliable, most crashes now result from pilot error or malfeasance. While crashes due to pilot error often begin with a single point of failure in aircraft systems, those failures have been turned fatal by clumsy or erroneous pilot actions; the Air France 447, AirAsia, and TransAsia crashes are examples. Autopilots should be designed to deal correctly with single-point failures rather than turning control over to pilots who are sometimes unable to respond adequately. It would also be straightforward to detect and prevent the deviations from flight plans that lead to crashes (a sketch of such a check appears after these comments). The Malaysia Airlines and Germanwings crashes could have been prevented by self-protective software. Autopilot functionality has not kept pace with improvements in computer hardware and memory size, and hundreds of people have lost their lives as a result.

  2. It has always been the belief that Granny would not fly if there were no pilot in the cockpit. Multiple recent events might now leave Granny grateful to know there won’t be a pilot in the cockpit of her flight.

  3. To succeed in the context considered here (the aircraft flight deck), foul intent or direct malfeasance must combine with superior mental faculties and special training. We are faced with a very special sampling of the human species. Could it be that the challenge of the Mission Impossible attracts these twisted individuals with a kind of fascination? In that case there is really no escape; the danger will always be there. E.g., why are smartphones and similar personal electronic devices treated with caution? If OEMs introduce an “uninterruptible autopilot system”, the challenge to the mind-twisted hacker will immediately be to find his way into this new toy and add a new feather to his cap, signing off on the first cybernetic airliner hijacking?
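On the first comment’s point about detecting flight-plan deviations: a minimal sketch of such a check, using a flat-earth approximation and an invented corridor width, might be:

    import math

    def cross_track_nm(pos, leg_start, leg_end):
        """Approximate cross-track distance (nm) from the planned leg.

        Positions are (lat, lon) tuples; the flat-earth projection is
        adequate for short legs and used here purely for illustration.
        """
        lat0 = math.radians(leg_start[0])
        def to_xy(p):  # project degrees to nautical miles around leg_start
            return ((p[1] - leg_start[1]) * 60.0 * math.cos(lat0),
                    (p[0] - leg_start[0]) * 60.0)
        px, py = to_xy(pos)
        bx, by = to_xy(leg_end)
        leg_len = math.hypot(bx, by)
        if leg_len == 0.0:
            return math.hypot(px, py)
        return abs(px * by - py * bx) / leg_len  # perpendicular distance to the leg

    def off_route(pos, leg_start, leg_end, corridor_nm=5.0):
        """Flag a position outside the protection corridor around the leg."""
        return cross_track_nm(pos, leg_start, leg_end) > corridor_nm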
