Recently it was reported that an ANA 737 en route to Tokyo nearly flipped over when a pilot operated the wrong control on the panel. According to ANA, the pilot was attempting to turn a knob that unlocks the cockpit door and mistakenly turned a knob that controls rudder trim. This photo, courtesy of Gizmodo, purports to show the placement of the two controls.
Reaction from the media as well as the public has been unsurprising. The event has been described as “pilot error”, with a rush to blame the individual as incompetent. While I’m not an expert on commercial aviation and don’t have full details of this incident, it appears to be a disturbing failure of human interface design and mistake proofing.
Human error is a serious and pervasive problem, affecting fields as diverse as data center management, transportation and medicine. Traditionally, the tendency has been to react to human errors by focusing on the individual who committed the mistake. Management would typically rely on one of the following methods to “ensure” that the error was not repeated:
- Caution – The individual would be advised to be “more careful”
- Train – The individual would review procedures or practice the operation
- Blame – The individual would be disciplined and possibly terminated
While each of these responses could provide some potential benefit, they do not address the core issue. Human beings are fallible, with limited attention spans, imperfect memory and finite energy. Despite our best efforts, given enough opportunities, we will repeat the mistake. When lives are at stake, even high levels of accuracy (e.g. 99.999%) are not sufficient.
Working with Toyota, Shigeo Shingo developed a system of error proofing that he called Poka-yoke. He popularized his system in a 1986 book called Zero Quality Control: Source Inspection and the Poka-Yoke System. Shingo’s mistake-proofing theory holds that humans will always make errors, but the system must prevent these errors from impacting customers. Error-proofed systems are specifically designed so that it is physically impossible for a human error to become customer impacting. Shingo’s profound finding was that firms should focus on process design to eliminate faults that would inevitably arise from fallible humans. He also advocated for line workers themselves to recommend and help implement Poka-yoke techniques. Shingo understood that the worker involved in a process could best understand how to bulletproof it.
A simple everyday example of mistake proofing is the three-prong plug. Because of the unique shape of each prong, the plug can only be inserted into the outlet the proper way. Another example is the safeguard built into diesel and regular gasoline filler nozzles. Diesel nozzles have a larger diameter and will not fit into the filler opening of a standard car. Unfortunately, the same cannot be said for the reverse! Yet another example of mistake proofing is the emergency power off button that fire codes require in data centers. These buttons have a hood on them that must be lifted before the button can be pushed, which prevents someone from inadvertently leaning against the button. It requires a thoughtful and purposeful action to shut down the data center’s power.
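The same hooded-button idea translates directly into software interface design. As a minimal sketch (the class and method names here are hypothetical, invented purely for illustration), a destructive operation can be designed so that it is impossible to invoke in a single accidental step: the caller must first perform a separate, deliberate "arming" action, and the API itself enforces the two-step flow.

```python
# Hypothetical sketch: a software analog of the hooded emergency-off button.
# The destructive call cannot be made by accident, because it requires a
# token that only a separate, deliberate arming step can produce.

class ArmedToken:
    """Proof of deliberate intent; obtainable only via arm_emergency_off()."""
    pass

class PowerController:
    def arm_emergency_off(self) -> ArmedToken:
        # The "lifting the hood" step: a purposeful, separate action.
        return ArmedToken()

    def emergency_off(self, token: ArmedToken) -> str:
        # Without a token from arm_emergency_off(), this call fails --
        # the interface itself enforces the two-step flow.
        if not isinstance(token, ArmedToken):
            raise TypeError("emergency_off requires a deliberately armed token")
        return "power off"

controller = PowerController()
token = controller.arm_emergency_off()   # step 1: lift the hood
result = controller.emergency_off(token) # step 2: push the button
```

Like the physical hood, the design does not rely on the operator being careful; it makes the careless path structurally unavailable.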
Related to mistake proofing is the discipline of Human Factors Engineering (HFE). HFE looks at how human capabilities and limitations should influence the design of products and processes. In the case of critical controls (e.g. a commercial aviation flight deck), HFE provides best practices for decreasing the likelihood that controls will be used improperly due to confusion, stress, fatigue or lack of experience.
Again, while I am not a pilot, I could envision the following HFE recommendations for our airplane flipping story:
- Critical controls that pilot the aircraft should not be commingled with non-critical controls
- Controls that provide different functions should have a different shape, different operation (e.g. twisting vs. pushing) and different coloring or lighting
- Controls that can be disruptive should provide some feedback as they are used. For example, as the control is twisted to a point that it could flip the plane, it becomes harder to twist or provides a vibration or audible tone. This feedback would serve as an instant warning and reminder of the seriousness of the action.
Applying Poka-yoke thinking to our airplane problem, the rudder trim control that was implicated should not be able to flip the plane at all. Perhaps a monitoring mechanism would prevent the pilot from moving the control past the point where it put the plane in danger of flipping.
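The monitoring idea above can be sketched in a few lines. This is purely illustrative: the function name, the limit value and the units are assumptions, not real 737 figures. The point is that the system clamps the input to a safe envelope regardless of what the operator requests, and reports when the limit was hit so the interface can provide feedback (a tone, vibration or increased resistance).

```python
# Hypothetical sketch of envelope protection for a trim control.
# SAFE_TRIM_LIMIT is an illustrative value, not a real aircraft figure.
SAFE_TRIM_LIMIT = 5.0

def apply_rudder_trim(requested: float) -> tuple[float, bool]:
    """Clamp the requested trim to the safe envelope.

    Returns the trim actually applied, plus a flag indicating the limit
    was reached -- which a real system might surface as a warning tone
    or tactile feedback to the pilot.
    """
    clamped = max(-SAFE_TRIM_LIMIT, min(SAFE_TRIM_LIMIT, requested))
    return clamped, clamped != requested

print(apply_rudder_trim(2.0))   # within the envelope, no warning
print(apply_rudder_trim(12.0))  # clamped to the limit, warning flagged
```

The error (an excessive input) can still happen; the design simply guarantees it cannot propagate into a dangerous state, which is exactly Shingo's distinction between a human mistake and a customer-impacting fault.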
So, how can forward-thinking companies use principles from HFE and Poka-yoke to minimize the chance of critical operational errors? It starts with an appreciation and understanding of basic HFE and Poka-yoke principles by management, process designers and operational personnel. There needs to be an accepted culture (driven by senior management) that serious faults will be avoided predominantly through improved design. While training is an important element of operational reliability, punishing individuals who make human errors is counterproductive and does not enhance long-term reliability. Ultimately, systems that prevent damaging human errors are the only way to have highly reliable processes.