Reliability. This sounds like a decent trait. Who wouldn’t want to be described as “reliable”? It sounds reputable whether you’re a person, a car, or a dishwasher. So how does one become reliable: predictable, punctual, “reproducible,” if you will?
Organizational reliability has received a fair bit of press lately. The industries that have come to embrace reliability concepts are those in which failure is easy to come by and likely to be catastrophic when it occurs. In the medical industry, failure happens to people, not widgets or machines, so by definition it tends to be catastrophic. These failures generally come in three flavors:
- The expected fails to occur (e.g., a patient with pneumonia does not receive antibiotics on time);
- The unexpected occurs (e.g., a patient falls and breaks a hip); or
- The previously unimagined occurs (e.g., a low-risk patient has a pulseless electrical activity [PEA] arrest).
A fair bit of research has been done on how organizations can become more reliable. In their book “Managing the Unexpected: Assuring High Performance in an Age of Complexity,”1 Karl Weick and Kathleen Sutcliffe studied firefighters, aircraft carrier crews, and nuclear power plant employees. All of these workers share a fundamental reality: failure in their workplace is catastrophically dangerous, and they must continuously strive to reduce its risk and mitigate it effectively. The Agency for Healthcare Research and Quality (AHRQ), through case studies and site visits, specifically studied how some healthcare organizations have achieved success in the different domains of reliability.2
What both studies found is that there are five prerequisites that, if done well, lead to an organizational “state of mindfulness.” What they and others have found in their research on high reliability organizations (HROs) is not that these organizations have failure-free operations, but that they continuously and “mindfully” think about how to be failure-free. Inattention and complacency are the biggest threats to reliability.
The Fundamentals
The first prerequisite is sensitivity to operations. This means actively seeking information on how things actually are working rather than how they are supposed to be working. It means being acutely aware of all operations, down to the smallest details: Does the patient have an armband on? Is the nurse washing their hands? Is the whiteboard information correct? Is the bed alarm enabled? It is the state of mind in which everyone knows how things should work, look, feel, and sound, and can recognize when something is out of bounds.
The next prerequisite is a preoccupation with failure. This refers to a system in which failures and near misses are completely transparent, openly and honestly discussed (without individual blame or punitive action), and learned from communally. This “group thought” continually reaffirms that systems, and everyone in them, are fallible. It is the complete opposite of inattention and complacency. It is continuously asking, “What can go wrong, how can it go wrong, when will it go wrong, and how can I stop it?”
The next prerequisite is reluctance to oversimplify. This does not imply that simplicity is bad, but that oversimplification is lethal. It forces people and organizations to avoid shortcuts and to resist simplistic explanations for situations that are inherently complicated. Think of it as making a complicated soufflé: leave out a step or an ingredient, and the result will be far from a soufflé.