Apparently, it was a simple mistake that caused the false missile alert to be sent to phones across the state of Hawaii. And while citizens were still recovering from the disorienting news, media outlets were quick to place blame.
The Verge criticized the inability to send a follow-up message. TechCrunch cited poor user-interface (UI) design. The New York Times pointed to “the logic of nuclear deterrence” as the underlying problem. The list goes on…
Add to all that the confusion of not knowing what was actually happening. Choosing a single scapegoat after the fact is a common mistake when diagnosing complex problems: it takes a complicated issue and reduces it to a single point of failure.
We do this because it’s the path of least resistance. If we can quickly point our finger at a defect and say “right there, that’s the only problem,” then we can feel satisfied with our efforts to resolve the issue.
While it’s gratifying to point to one source of failure and move on, complex issues deserve more than a reductionist conclusion. When one incident is attributed to so many different root causes, it becomes clear that no single point of failure exists.
The reality is that multiple things went wrong that day. Individually, they might have been inconsequential, but their combination had a major impact.
What can we, in the manufacturing industry, learn from this bundle of mistakes? The New York Times put it best in their analysis of the situation: “...scholars say their gravest dangers come from the uncertainty they (nuclear weapons) create and the fallibility of human operators, who must read every signal perfectly for mutual deterrence to hold.”
"The underlying risk in any process is the uncertainty created by the fallibility of human operators."
Granted, this statement was intended as a critique of the software’s complexity, but it also unearths a key insight: the underlying risk in any process is the uncertainty created by the fallibility of human operators. The false missile alert highlighted that human error is more prevalent than we’d like to believe.
This creates uncertainty. And nobody likes uncertainty.
It’s unsettling to look at a process and say, “you will never be perfect,” particularly if your industry lives and breathes process control. Processes will never be perfect because no matter how advanced or automated they become, there will always be risk from human fallibility.
"Blaming humans for making mistakes is like blaming matches for starting fires."
Blaming humans for making mistakes is like blaming matches for starting fires. As unsettling as this reality is, there is a silver lining in the “certainty of uncertainty.”
The Standard Work cycle of identifying inefficiencies, implementing improvements, and evaluating the results sends a powerful message to the people involved: everyone is responsible for reducing risk and eliminating inefficiencies.
With the reality of human error, mitigating risk is the closest we can come to perfection. Standard Work is effective because it lets us evaluate processes objectively, using that foundation to create a cycle of analysis and improvement.
Taiichi Ohno, the father of Lean Manufacturing, famously said that “Without standards, there can be no Kaizen.” Ours is a tool that helps you standardize, train, and inform employees of changes, analyze the results, and continue improving.
At Dozuki, we believe that Standard Work is the key to operational success—in fact we’ve seen it work for hundreds of companies. The emergency missile alert on January 13th may not have been real, but the lessons we can learn from it certainly are.