Pilot Training, “Alpha Protection” Software and Fly-By-Wire Rules
April 20, 2024

Last week, Harry Nelson, former Vice President of the Airbus flight test department, warned that pilots rely too much on automation and that airlines need to better train their flight crews, who may have become complacent and may not be capable of adequately flying the aircraft manually should automation fail.

The crash of Asiana Airlines flight 214 in San Francisco in 2013 was a clear example of a flight crew that lost competence through over-reliance on automation. Asiana’s standard procedure is for pilots to switch on the autopilot shortly after takeoff and to use the automatic landing system, coupled to the airport’s instrument landing system (ILS), for landings. On the day of the crash the ILS at SFO was under repair, but since the weather was clear, visual landings could easily be undertaken. Every other flight that day had no problems, but the Asiana crew, less accustomed to landing the aircraft manually, misjudged their approach and crashed near the end of the runway, resulting in three fatalities, multiple injuries, and a written-off aircraft, all due to a lack of experience in basic flying skills.

Automation and Flying Skills

The A320 was designed, from its outset, differently from the Boeing 737, and presented an advance in cockpit automation. With fly-by-wire controls, Airbus could introduce software to prevent certain “edge of the envelope” situations from occurring, easing the burden of flight training, since the aircraft was easier to fly – assuming all of the automation systems continue to operate properly. But that doesn’t always happen.

Four months ago, a Lufthansa A321 descended rapidly and without command from cruise altitude, at 4,000 feet per minute, until the pilots could regain control of the aircraft. EASA, the European equivalent of the FAA, traced the fault to two failed angle-of-attack sensors, which caused the airplane’s “alpha protection” software to come into play, and issued an Airworthiness Directive modifying the aircraft flight manual. “Alpha protection” software is built into every Airbus aircraft to prevent it from exceeding its performance limits in angle of attack, and kicks in automatically when those limits are reached.

The larger question is whether simply changing the flight manual, rather than the “alpha protection” software itself, was the most appropriate response to a potential sensor failure.

Could the crash earlier this year of an AirAsia A320 in Indonesia, after an apparent attempt to climb during a thunderstorm, also relate to software? The relevant facts are still not in, and we may never know. With an increase in angle of attack and a potential aerodynamic stall resulting from a climb in turbulent weather, one would expect the “alpha protection” software to kick in if a stall were imminent. The aircraft did crash, and while we don’t yet know why, we do know that the margin for error at cruise speeds at high altitude is quite thin, and thinner still in turbulence.

A computer controlling an aircraft is only as good as the data it receives, and as the sensor failures on the Lufthansa aircraft illustrated, the old computer adage “garbage in, garbage out” still applies. In that case, the garbage in was a sensor reporting too high an angle of attack, which led the computer to point the aircraft’s nose down into a dive at several times the normal rate of descent. The experience of that flight crew enabled them to overcome the software-commanded descent and regain control of the aircraft. But as we saw with the Asiana crash in San Francisco, not all crews are adept at manually flying an aircraft.

The aircraft industry has moved to fly-by-wire and computer controls because they are lighter in weight and just as effective as mechanical linkages. From a maintenance standpoint they may even be superior, since electrical components rarely wear out while mechanical components do, and they have proven dependable and reliable.

The issue that comes into play is what happens when something goes wrong. The A320 began life on the wrong foot, as its software caused the crash of a prototype at an air show in Habsheim, France before the aircraft entered service. The first version of the “alpha protection” software, meant to prevent crashes, assumed that the aircraft’s low pass (under 50 ft. on the radar altimeter) signaled an attempted landing and brought the aircraft down into a forest, despite the pilot’s efforts not to land during the demonstration. While this flaw was fixed before the aircraft was certified, it illustrated the dangers of overly ambitious software designed to protect against an inattentive pilot.

When everything is working correctly, it is easy for an aircraft to fly on an automated basis. But the reason we have two highly trained individuals in the front seats is to apply their judgment and experience, offering the best chance of recovering from a potentially life-threatening situation when something goes wrong. And, unfortunately, things do go wrong, albeit quite rarely. When they do, there is little room for error or vacillation in decision-making.

Several years ago, Air France flight AF447 had a pitot tube freeze at altitude in a thunderstorm over the South Atlantic off the coast of Brazil, causing the airspeed indicator to report erratic speeds. The autopilot and the Airbus “alpha protection” software react to such data as they are programmed to, likely pushing the aircraft’s nose down if the speed reads too low, or pulling the nose up if the speed reads too high. But at cruise altitude in turbulent weather it doesn’t take much of a correction to put an airplane in jeopardy, and into an aerodynamic stall that can result in loss of control. In this case the pilots, likely mesmerized by the myriad warning messages from the failing automated systems, did not react in time to push the nose down and fly the aircraft manually, and the aircraft was lost.

The question that must be asked is whether there should be a kill switch for the “alpha protection” system, enabling the pilots to fly the airplane in a direct, manual mode, with the aircraft reacting directly to how the pilots operate their controls. Both Boeing, in its 777 and 787, and Bombardier, in its CSeries, provide that option in their fly-by-wire systems. Airbus does not, and uses the same “alpha protection” across all its product lines as the standard operating mode. The choice fundamentally comes down to whom you trust: the pilot or the computer.
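
To make that choice concrete, here is a minimal sketch of the two philosophies in Python. The mode names, limits, and clamping logic are purely illustrative, not any manufacturer’s actual control laws:

    # Illustrative only: two control-law philosophies for a fly-by-wire channel.
    from enum import Enum

    class ControlLaw(Enum):
        NORMAL = "normal"   # envelope protections active
        DIRECT = "direct"   # stick commands pass straight through

    AOA_LIMIT_DEG = 15.0    # hypothetical protection threshold

    def elevator_command(stick_input, current_aoa_deg, law):
        # stick_input ranges from -1.0 (full nose down) to +1.0 (full nose up)
        if law is ControlLaw.NORMAL and current_aoa_deg >= AOA_LIMIT_DEG:
            # Protection overrides the pilot: command nose down regardless of stick.
            return min(stick_input, -0.2)
        return stick_input  # DIRECT law: the pilot is the authority

    # The "kill switch" under discussion would amount to: law = ControlLaw.DIRECT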

As an experienced pilot with aerobatic training, I fully understand the limitations of airplane performance and edge-of-the-envelope maneuvers. I took a course in aerobatics to learn how to control an aircraft in any attitude, and how to safely get out of trouble in an unusual situation. As a result, I understand that in certain situations, such as an aerobatic tail slide, crossing the controls in ways one would never contemplate in normal flight is required to maintain control of an aircraft in an unusual attitude. In an emergency, if similar maneuvers were needed, they would likely be overridden by the computers in an Airbus, but likely allowed in a Boeing or Bombardier in “manual” mode.

The “alpha protection” system is a nice idea. But truly effective and intelligent software would continually evaluate multiple pieces of data to determine whether it might be receiving false data from a faulty sensor before plunging an aircraft into a nose dive. If one is flying straight and level, at a constant speed, altitude, and throttle setting, yet a sensor shows a change in angle of attack, the reading is mathematically and physically impossible and needs to be checked: an upward change in angle of attack would have the aircraft climbing and its speed dropping unless additional throttle were added. One would think that advanced software would continually check the logic of multiple readings to ensure that sensors and instruments were consistent with each other and with the laws of physics, and alert the pilot when an anomaly occurs. But apparently there is a shortcoming in the decision logic within Airbus’ software itself, as demonstrated by the recent Airworthiness Directive that warned pilots of the issue in the manual but did not address the fundamental problem in the code.
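
To illustrate the kind of cross-check described above, here is a minimal sketch in Python. The function names, tolerances, and units are all hypothetical, and real avionics software is built to far stricter standards, but the logic is the point:

    # Hypothetical sanity check on an angle-of-attack (AoA) reading: in steady,
    # level flight at constant thrust, a genuine pitch-up should show up as a
    # climb and/or decaying airspeed. All thresholds are illustrative.
    def aoa_change_is_plausible(aoa_delta_deg, altitude_delta_ft,
                                airspeed_delta_kt, throttle_delta_pct):
        if abs(aoa_delta_deg) < 0.5:
            return True  # no significant change claimed; nothing to check

        # AoA moved, but altitude, speed, and thrust are all unchanged:
        # physically inconsistent, so suspect the sensor rather than the air.
        steady_flight = (abs(altitude_delta_ft) < 50 and
                         abs(airspeed_delta_kt) < 2 and
                         abs(throttle_delta_pct) < 1)
        return not steady_flight

    if not aoa_change_is_plausible(aoa_delta_deg=8.0, altitude_delta_ft=0.0,
                                   airspeed_delta_kt=0.0, throttle_delta_pct=0.0):
        print("AoA miscompare: alert the crew and degrade protections gracefully")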

Compounding the issue today are cyber-security concerns that transcend logic flaws and could expose the aircraft to malware or malicious attacks; physical as well as software protections are needed. The ideal answer may require a redesign and new software standards for avionics, flight management systems, and fly-by-wire controls. To accomplish that effectively, we’ll need physical separation of sat-com links behind physical as well as software firewalls, and likely a next-generation computer language that can be firmly secured against attack, can leverage advanced communication technologies, is productive to program in, and incorporates the advanced decision logic and intelligence needed to detect data or communication faults before issuing commands to aircraft systems. Such technology is within reach today, and the industry needs a massive upgrade of its capabilities to design more effective aircraft control systems and avionics and to mitigate these risks.


6 thoughts on “Pilot Training, “Alpha Protection” Software and Fly-By-Wire Rules”

  1. I like your idea of a more sophisticated, self-checking flight control system that can identify and filter erratic sensor readings. If the probability of this type of error could be decreased from, say, once in 200 million flight hours to once in 500 million flight hours, that would be very positive (example figures). Multiple simultaneous sensor malfunctions like those in the Lufthansa A321 incident are extraordinarily infrequent, but every so many years similar things will happen.

    There is a “kill switch” that puts the flight control system into alternate law. After the Lufthansa A321 uncommanded-dive incident, EASA put out an Emergency Airworthiness Directive (AD No. 2014-0266-E) that details how to go to alternate law by turning off two ADRs. By contrast, in the RAF A330 Voyager uncommanded-dive incident the pilot’s very first response was to do this; however, it turned out that his failure to turn off the ADRs saved him and everyone on board, as the uncommanded dive was caused by cockpit debris (a digital camcorder) and the envelope protection preserved the structural integrity of the airframe.

    A partial solution to the problem of faulty airspeed readings was instituted on the Boeing 787 by feeding the flight control system GPS data that is independent of the pitot-static system. A weakness of the pitot-static system is that its pressure sensors can get blocked by ice, wasps, soot, paint, or whatnot. This partial solution decreases the probability of, but does not completely eradicate, faulty airspeed readings.
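
    A minimal sketch of that kind of cross-check, assuming a wind estimate is available (all names and tolerances here are made up for illustration):

        # Hypothetical cross-check of pitot-derived airspeed against GPS ground
        # speed. A blocked pitot tube cannot corrupt the GPS figure, so a large,
        # persistent disagreement points at the pitot-static system.
        def airspeed_sources_agree(pitot_airspeed_kt, gps_ground_speed_kt,
                                   est_headwind_kt, tolerance_kt=20.0):
            # True airspeed is roughly ground speed plus the headwind component;
            # wind-estimation error is why the tolerance is generous.
            gps_implied_airspeed = gps_ground_speed_kt + est_headwind_kt
            return abs(pitot_airspeed_kt - gps_implied_airspeed) <= tolerance_kt

        if not airspeed_sources_agree(pitot_airspeed_kt=120.0,
                                      gps_ground_speed_kt=450.0,
                                      est_headwind_kt=30.0):
            print("Airspeed miscompare: treat pitot data as suspect")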

  2. The description of the Habsheim crash is clearly wrong:
    1.) The A320 was already certified and in service with Air France
    2.) There is and was no such automatic landing detection function in the A320.
    3.) The pilot intended to demonstrate the maximum angle of attack for the air show, so the engines were running at their lowest speed. The Alpha-Max protection of the A320 kept the plane flying without stalling. But the plane was lower than the pilot thought, because he relied only on the barometric altimeter and not on the radar altimeter.
    When the pilot recognized that he was too low to clear the forest, he set full thrust, but it was too late. The Alpha-Max protection lowered the nose to prevent a stall (an immediate drop of the plane), and the plane flew, under control, into the forest.

    So far, these are the facts on which the accident investigators and the pilot agree. But there is a conspiracy theory that the investigators manipulated the data of the flight data recorder by changing the time at which the pilot applied full thrust. The pilot claims he applied full thrust early enough but there was no reaction from the plane. The flight data recorder shows about 5 seconds from the moment the thrust lever was advanced until the engines ran up to full thrust, which is normal behaviour for an engine.

  3. Your suggestion that the software check that the aircraft’s response to an issued command is logical caused me to have a “don’t they already do this?” moment. If a piece of software commands a nose-down in order to reduce the angle of attack, and the angle of attack does not decrease, then the only option for the software is to say “the assumptions behind my programming have been violated and you, the pilot, will need to sort this out.”
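
    In sketch form, such a command-response check might look like this (purely illustrative, not any real flight control code):

        # Illustrative closed-loop check: after commanding nose down, verify
        # that the measured angle of attack actually starts decreasing.
        import time

        def nose_down_had_effect(read_aoa_deg, deadline_s=3.0, min_drop_deg=0.5):
            """Return True if AoA falls by min_drop_deg within deadline_s.
            A False result means: assumptions violated, alert the pilot
            and hand over control."""
            start_aoa = read_aoa_deg()
            t0 = time.monotonic()
            while time.monotonic() - t0 < deadline_s:
                if start_aoa - read_aoa_deg() >= min_drop_deg:
                    return True
                time.sleep(0.1)
            return False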

  4. After a career in software development, it appears to me that virtually all of the crashes of the past 10 years could have been prevented by smarter autopilot software. Autopilots should be programmed to identify and correctly handle any sensor failure that could potentially be handled by a human pilot. Additional sensors can be added, and cross-checking with other sensors implemented, to let an autopilot determine which sensors are failing and ignore them (a voting sketch follows at the end of this comment). Worldwide terrain databases are available to allow prevention of CFIT incidents like the Asiana SFO and recent Germanwings crashes.

    Expecting pilots always to react correctly to an unexpected problem, in the short time available before it is too late, is expecting the impossible, as events have shown. Improving autopilot software is a very doable way to reduce or eliminate the need to rely on the variable skills of the thousands of pilots out there.
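
    One classic form of that cross-checking is triplex voting: with three redundant sensors, take the median so that any single failed unit is outvoted. A hypothetical sketch:

        # Hypothetical triplex voting: the median of three redundant readings
        # is immune to any one wildly wrong sensor.
        def voted_reading(a, b, c, disagree_threshold=5.0):
            median = sorted([a, b, c])[1]
            # Flag any unit straying too far from the voted value.
            outliers = [r for r in (a, b, c) if abs(r - median) > disagree_threshold]
            return median, outliers

        value, suspects = voted_reading(4.1, 4.3, 19.7)  # third unit failing
        print(value, suspects)  # -> 4.3 [19.7]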

  5. I believe you are right, and I think that’s the direction the development will go. However, I still think pilot training is awfully important. Pilots need to possess a well-developed understanding of both the aerodynamics and the flight control system software of the planes they pilot. “What does this piece of software do for me, exactly?”, is a question pilots ought to be able to answer. “Which sets of control laws are available to me, and how do I switch between them?”

    The Asiana SFO accident (OZ214), in which the pilots were apparently unable to perform a visual approach landing, is pretty darn damning. And the AF447, in which a co-pilot provided maximum elevator inputs for several minutes continuously while hearing stall warnings, suggests rather severe flaws in the training regimen, to put it mildly. There is room for improvement.

  6. I found that achieving the quality of “naturalness” in software design was both difficult and important. Having too many modes and too much complexity in autopilot software calls for too much skill by pilots in high stress and confusing environments. In the case of AF447, the stall warning apparently stopped at very high angles of attack, then re-started when the AOA dropped – which was fatally confusing to pilots struggling to understand what was going on. Autopilot software should “know” what to do in any foreseeable situation and not rely on levels of skill and training which are not always going to be available in critical situations.
