Resilience Roundup - Those found responsible have been sacked - Issue #37

Thanks to everyone who took the time to come say hi at Monitorama! It was my first one and I see why people enjoy it so much, I learned a lot!

P.S. I’ll be speaking at SRECon APAC in Singapore this week, so if you’re around, please come say hi!


Those found responsible have been sacked: some observations on the usefulness of error

This is a paper by Richard Cook and Christopher Nemeth in which they discuss the purposes of the notion of “error” as a category of human performance, especially in light of Rasmussen’s and Hollnagel’s staunch criticism of it.

As Hollnagel puts it:

“Inventing separate mechanisms for every single kind of ‘Human Error’ may be great fun, but is not very sensible from a scientific point of view.”

They use a case study to demonstrate the point.

The case study is from a hospital where a special table, designed to hold patients for spinal surgery, rotated too far during an operation, causing a patient to fall from it.

I found the case study useful not just because it demonstrates this idea, but also because it gives insight into how investigations can be conducted.

The investigation

After the patient fell, though they were not injured, an investigation was launched.

Four people arrived and asked the surgical resident to demonstrate and describe what happened, while recording audio. The authors don’t delve into exactly what was asked, but they do say that the investigators continued to ask questions to establish context.

Upon examining the table, it became clear that multiple factors were at work:

  1. The table can rotate freely, by design, so that spinal surgeries can be performed, but as shown in this case it can also allow a patient to fall.
  2. An operating team as a whole was assigned to prepare the patient as a group, but there was no one person assigned to operating the table itself.
  3. Also, there were multiple controls, a lever to tighten or loosen the how much the table could swing, a switch to lock the foot, and some control lights.
  4. There were many labels on the device describing how to use it and how not to use it. Even after spending a lot of time studying them, the investigators found conflicting cues. The controls and displays are so complex and ambiguous that the authors say it is “impractical for any clinician to understand” them.
  5. Finally, locking the table keeps it from swinging, but locking it takes a lot of force. It is perhaps more force than some people can actually apply, or more than they would want to apply to an expensive medical device without knowing it was the correct thing to do.

At this point the investigation team reported their findings.

How the organization responds

“How they dealt with it tells us about the manner in which organizations respond to adversity”

This is perhaps the most telling part of the whole story. Accidents and incidents of course can happen anywhere. But how individual organizations respond tells you a lot about how they view learning, improvement, and error.

The organization set up a “root cause analysis” meeting with lawyers, risk managers, the investigation team, equipment techs, etc.

The surgeon had spoken to the president of the company that makes the tables, who said they were surprised by the accident, even though the FDA database shows other incidents like this one.

After the investigation team presented their findings, solutions began to be generated.

It turns out that the features of the table are seen as unique and important by the surgical team, so they want to keep it. The hospital won’t modify it either; they’re bound by rules around FDA certification of medical devices, warranties, and the like.

Suggestions for fixing it included more warning signs, more training, and restricting use of the table to those who had been trained: many things that seem like they’d position practitioners to be blamed.

The hospital wrote a report, sent it to the manufacturer, and made an “improvement plan” that contained suggestions for more training of staff and for hanging a sign the investigation team had made over the controls.

Ultimately, though, nothing changed; even the warning signs never got used.

Why is error useful?

So why is this idea of human error useful, especially if nothing changed?

For organizations, error can serve a purpose, providing things like:

  • A mechanism to distance people from the implications of failure at work
    • This can be a problem: if we chalk an error up to some quality of the person (e.g. they were lazy), it can give other practitioners the idea that they are not subject to the same failure, because, of course, they are not lazy.
  • Helping the organization potentially reduce liability if the accident is attributed to a single person’s failure

But for researchers (and those of us who benefit from their work), error in this sense actually does become useful. It’s an obvious way to see which investigations were simply terminated early, and a signal that something interesting may be at work.

Takeaways

  • While human error isn’t a category of human performance, it’s useful to organizations
    • Just because it is useful though, does not mean it isn’t harmful.
      • Using the notion of error to distance individuals or organizations from an outcome can be misleading. It can give the idea that others are not vulnerable to doing the same things, though they are
  • Error can still be useful to us, though, since it flags investigations that were ended early

Don't miss out on the next issue!