On Saturday, Asiana Airlines flight 214, a Boeing 777, crashed upon landing at San Francisco International Airport. Through a combination of luck and robust aircraft design, only two people were killed, a surprising outcome given the photos of the wreckage. Unfortunately, many more were injured, a number of them in serious or critical condition. Intense speculation surrounds the cause of the crash, with theories ranging from pilot error to inoperative runway guidance systems to mechanical failure. What is known so far is that the plane made an abnormal final approach, striking its tail against a sea wall that precedes the runway. What’s unclear is why.
The NTSB, the government body in charge of the investigation, has numerous experts on scene. Their process is methodical, sometimes taking years to establish a series of root causes and recommendations following a plane crash. In this case, the NTSB has a number of factors in its favor. The remaining pieces of the plane are immediately observable, unlike crashes that happen at sea. The black boxes have already been recovered. The crash happened in a populated area, with numerous eyewitnesses as well as video. The crew survived the crash and will be able to provide important information about the final fateful minutes of the flight.
When accidents like Asiana 214 happen, people hunger for immediate and simplistic answers. Our need to feel secure fuels a desire for straightforward explanations that bring closure to mysteries. Our culture of real-time social media rapidly turns people into aviation experts, investigative sleuths and conspiracy theorists. Unfortunately, armchair speculation is rarely accurate, and ultimately the contributing factors of a crash can be diverse and complex.
The intense media focus creates a false sense of insecurity, leading many to feel that commercial air travel is risky. In truth, aviation safety efforts have produced remarkable results — commercial air travel is an incredibly safe mode of transportation. Prior to Saturday, the last fatal crash on US soil of a commercial jet of at least regional size was back in 2006. The last fatal domestic crash involving a larger, “transatlantic” class jet — think 737s and larger — was in 2001. During that timeframe there have been tens of millions of safe, uneventful flights. Much of this is due to the disciplined, evidence-based approach to safety of modern aviation.
One of the foundational safety principles of modern aviation is known as the Swiss cheese model. It is a concept that was first described by the cognitive psychologist and researcher James Reason. In his seminal book Human Error, Reason chronicled a number of famous disasters, including Three Mile Island and the Challenger space shuttle accident. But instead of merely reviewing the underlying factors of each incident, he proposed an integrated theory of accident causation. Reason had several profound insights:
- Accidents involving complex systems are often the result of the confluence of multiple contributing factors.
- Contributing factors can occur in a wide range of domains from unsafe acts — such as a pilot approaching a runway at an improper altitude — to organizational errors — such as a culture of fiscal austerity that does not prioritize training activities.
- As opposed to the active errors that occur at the time of an incident, many contributing factors are in fact latent errors. These latent errors lie dormant, waiting for an active error to turn them into a trigger for an incident.
- Human beings, lacking unlimited concentration, focus and memory, will always be prone to operational errors. Properly designed systems account for this limitation, expect a level of human error, and ultimately keep these errors from resulting in an actual incident.
Reason summarized his integrated theory of accident causation with an excellent visual known as the Swiss cheese model.
Let’s consider the model in the context of an investigation into a crash landing such as Asiana flight 214. As a disclaimer, this example is not intended to represent a factual analysis of this tragic event. It is simply an example of how the Swiss cheese model can be used by investigators to gain a deeper perspective on the root cause of an accident. Instead of merely focusing on the immediate visible possibilities (e.g. pilot error), the Swiss cheese model forces investigators to look at the latent failures lurking deep within the organizational body.
In the case of a crash landing, examples of unsafe acts could include items such as:
- Incomplete use of mandatory checklists
- Insufficient intra-crew or crew-to-tower communications
Stepping back a level is a layer of failures known as preconditions for unsafe acts. A classic example in a plane crash would be fatigue, as when pilots on a long flight have had insufficient sleep. The next layer, unsafe supervision, could be represented by the following examples:
- Insufficient training
- Incorrect pairing of flight personnel — for example, two junior pilots
At the deepest background level are organizational influences. As an example, an organization that has a strong focus on growth may not be as invested in extensive training programs for new personnel. An airline undergoing margin pressures may be disinclined to make investments in state-of-the-art safety programs.
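The layered structure described above can be expressed as a small program. The sketch below is purely illustrative, not part of Reason's work: each layer of "cheese" has a set of holes (weaknesses), and an incident occurs only when a hole lines up through every layer. The layer names follow Reason's taxonomy; the hole positions are invented for the example.

```python
# Hypothetical sketch of the Swiss cheese model: a hazard results in an
# incident only when every defensive layer has a weakness ("hole") at the
# same position. Hole positions here are invented for illustration.

LAYERS = [
    ("organizational influences", {3, 7}),
    ("unsafe supervision",        {2, 7}),
    ("preconditions",             {5, 7}),
    ("unsafe acts",               {1, 7}),
]

def incident_path(layers):
    """Return the hole positions that align through every layer, if any."""
    aligned = None
    for _name, holes in layers:
        aligned = holes if aligned is None else aligned & holes
    return aligned or set()

print(incident_path(LAYERS))  # {7}: every defense failed at the same point
```

Note what the model implies: closing the hole in any single layer (for instance, better crew pairing at the unsafe-supervision layer) breaks the alignment and prevents the incident, even though weaknesses remain elsewhere. That is why fixing only the final, visible unsafe act leaves the deeper layers just as porous.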
Reason’s profound contribution was the idea that an unsafe act was simply the hole in the final layer of cheese that allowed the ultimate accident. This unsafe act was unlikely to happen without a series of previous failures lying dormant in the background. Prior to Reason, the predominant focus of an accident investigation was on the operators (e.g. pilots, nuclear plant technicians) themselves. They were the people most easily linked to the accident events. The simple answer was to fire, discipline or retrain the offending employees. Reason argued that doing so would not solve the deeper organizational issues that led to the problem.
Since the publication of Human Error, Reason’s Swiss cheese model has been adopted by a number of high-risk industries. Along with commercial aviation, it has become a key source of guidance in hospitals and nuclear power plants. Unfortunately, outside of these high-profile industries — where human life is on the line — the Swiss cheese model is relatively unknown. In my field of information technology, few professionals are aware of its existence. This leads to inadequate error investigation protocols that frequently focus on the operator.
Our inherent human nature has us seeking simple answers when tragedy strikes. In the case of commercial “accidents” — whether they are plane crashes or computer failures — it’s easiest to find causes, and direct blame, at the actual site of the incident. Progressive organizations will take the Swiss cheese model to heart and adopt a more holistic approach to accident prevention and investigation. They will recognize that culture, organization, and process design are all needed to provide adequate defensive layers for inevitable human errors.