In the aftermath of Germanwings 9525, we have learned that the crash was an intentional mass murder. While many questions about this crash remain unresolved, several issues emerge that will draw future attention in the industry.
Who Do You Trust?
After 9/11, we gave pilots the ability to lock the cockpit door so that no potential hijacker can enter. Yet we’ve recently seen an Ethiopian Airlines co-pilot lock out his captain to seek political asylum, and now the mass murder of innocent passengers and crew by a deranged pilot. Did we over-react to 9/11 by preventing flight crews from re-entering the cockpit?
In the United States, when a crew member leaves the cockpit, a flight attendant or another crew member must enter and remain until the pilot returns, but this practice is not widespread elsewhere in the world. The theory, at least, is that the second crew member can open the door from the inside in an emergency, preventing a lockout. Several European airlines are adopting this logical step in the aftermath of the crash to prevent a similar situation from arising again, but it may be overdue. It seems sad that “trust but verify” will soon become the watchword for an industry that prides itself on professionalism and safety first.
Should This Impact Future Aircraft Designs?
In modern fly-by-wire systems, we already see flight envelope protection measures in place, including Airbus “alpha protection” against high angles of attack and stalls, and automated limitations on pilot control of the airplane. The question now is whether we need additional controls, or should continue to trust pilots.
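The core idea of envelope protection can be illustrated with a toy sketch. To be clear, the function name, thresholds, and linear mapping below are invented for illustration, not Airbus's actual control law: the point is simply that the commanded angle of attack is capped no matter how far the pilot pulls the stick.

```python
# Illustrative sketch of alpha protection; values are assumptions, not
# real Airbus parameters. The control law refuses to command an angle
# of attack beyond a hard limit, regardless of pilot input.

ALPHA_MAX = 15.0  # deg: hypothetical hard limit the law will not exceed

def limited_alpha_command(stick_pull_fraction: float) -> float:
    """Map stick input (0.0 = neutral, 1.0 = full aft) to a commanded
    angle of attack, capped at ALPHA_MAX instead of reaching a stall."""
    requested = stick_pull_fraction * 20.0  # naive unprotected mapping
    return min(requested, ALPHA_MAX)

print(limited_alpha_command(1.0))  # full aft stick -> 15.0, not 20.0
print(limited_alpha_command(0.5))  # moderate pull passes through -> 10.0
```

Even in this cartoon form, the design question is visible: the pilot's authority is deliberately bounded by software.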
In this case, the pilot commanded the autopilot to descend to 96 feet, according to news reports, in an area where flight charts show a minimum en-route altitude of more than 6,000 feet. Should autopilot software introduce additional protections that double-check the aircraft’s GPS position, determine the required minimum altitude, and prevent the pilot from commanding a descent below minimums? (Recall the 2012 Sukhoi Superjet demonstration flight over Indonesia, which would have benefited from exactly this.) All of that information is available today, but it is not integrated into avionics logic at that level of detail. Should we be designing more fail-safes into that logic? Under what circumstances could, or should, they be overridden? Can we really accommodate every eventuality in software, and what happens when something does go wrong, mechanically or otherwise?
We’ve seen human error cause crashes of highly automated aircraft, including the 2002 mid-air collision over Überlingen, near the Swiss border, in which both aircraft carried TCAS systems commanding opposite maneuvers to avoid a crash. There, the air traffic controller instructed one crew to descend while their TCAS commanded a climb; the pilots followed the controller rather than the instrument, and both aircraft descended into each other instead of one climbing and one descending, as the system was designed. Do we trust the human in air traffic control, or the instrument on our panel?
Today’s technology enables aircraft to literally fly themselves from take-off to touchdown, and some would say that pilots are really only needed when something goes wrong. But the danger of over-reliance on automation was clearly shown by the 2013 crash of Asiana Airlines 214 in CAVU weather at San Francisco, where the pilots proved incapable of flying a visual approach while the ILS glideslope was out of service for repairs. Over-reliance on technology can itself lead to disaster if pilots have little experience flying the aircraft manually when things do go wrong.
The Drone Approach?
Should the industry take automation to the next level and enable ground control of an aircraft in the case of hijackers, or an emergency in the cockpit, using the same mechanisms that remotely fly drones? Ground personnel could then override the cockpit when something goes awry and bring the aircraft to a safe landing at a nearby airport. But this opens an entirely different set of issues, including the cyber-security needed to prevent a rogue party from remotely seizing control of an aircraft with those capabilities. The trade-offs between design choices are not always easy or straightforward. Boeing’s “uninterruptible autopilot system” is also worth examining, if for no other reason than that remote control could increase safety when an aircraft deviates from its flight plan.
The Bottom Line
Human error is a fact of life that we all must deal with. But in aviation, where the consequences of a mistake can be fatal, we require multiple layers of safety, including redundant systems; most accidents result from multiple failures rather than just one. Each accident raises the question of what could be done differently to prevent something similar from ever happening again. We’ve come a long way in aviation safety in recent years, but the system is fundamentally based on trust – trust that the pilots want to arrive safely as much as their passengers do, and will do everything in their power to do so. When that sacred trust is broken, the system can break down. How best to restore that trust will be debated throughout the industry as we learn more about the Germanwings tragedy.