When mental models go wrong. Co-occurrences in dynamic, critical systems

Welcome to the last issue of 2019! I’ll be spending the next couple of weeks getting Resilience Roundup ready to keep bringing you analysis in 2020.

Want to make a special request? Have something you want everyone to see? There’s still a small number of slots remaining to sponsor in 2020. Hit reply and let me know!


This is a paper by Denis Besnard, David Greathead, and Gordon Baxter, published in the International Journal of Human-Computer Studies.

They explain what can happen when two things, not necessarily related, happen at the same time. They lead with the example of a coworker who was giving a presentation and was having trouble with the projector and their VGA cable. The video would go out and they would move the mouse; when that didn’t bring the video back, they’d tighten the cable and the video would return. This went through a few cycles of checking that the cable was properly seated before they realized that the issue was simply that the machine was going to sleep and was slow to wake from it. This meant that any action they took to wake it would have appeared to work.
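
To make the trap concrete, here is a small illustrative sketch (the class, the actions, and the wake delay are my own invention, not from the paper) of how any action taken while a slowly waking machine recovers gets credited as the fix:

```python
class SleepyMachine:
    """Display is dark while asleep; wakes a fixed delay after the first input."""

    def __init__(self, wake_delay=3):
        self.asleep = True
        self.wake_delay = wake_delay
        self.ticks_until_awake = None

    def any_input(self):
        # Any input at all (mouse wiggle, cable jiggle) starts the slow wake-up.
        if self.asleep and self.ticks_until_awake is None:
            self.ticks_until_awake = self.wake_delay

    def tick(self):
        # Time passes; the machine wakes when the delay runs out.
        if self.ticks_until_awake is not None:
            self.ticks_until_awake -= 1
            if self.ticks_until_awake <= 0:
                self.asleep = False
                self.ticks_until_awake = None

machine = SleepyMachine()
actions = ["wiggle mouse", "tighten VGA cable", "reseat cable"]
credited = None
for action in actions:
    machine.any_input()
    machine.tick()
    if not machine.asleep:
        # The observer credits whichever action merely co-occurred with waking.
        credited = action
        break

print(f"Observer concludes: '{credited}' fixed the video")
```

The recovery is driven entirely by the timer, yet the last action taken before the screen comes back always gets the credit, which is exactly the co-occurrence the authors warn about.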

The same thing can happen with a mental model of a system. We can mistakenly conclude that our mental model is accurate when two events that may not be related at all happen one after the other.

Mental models get talked about a lot, so the authors are careful to define the term. To them, mental models are “scarce, goal driven images of the world that are built to understand the current and future states of the situation.”

Because mental models are created to accomplish a particular goal, they may not have many cognitive resources brought to bear in their creation, so they are incomplete and do not represent everything in a system.

Mental models are built from two main things:

  1. Whatever knowledge is needed to accomplish a given task.
  2. Some information from the environment.

As time goes on, or when something like an emergency happens, the model becomes simpler and more reliant on correlations of behavior between parts of the system. When this happens, only the most consistent or obvious indicators of system state may be taken into account.

Of course, it’s not that people go around consciously choosing to have incomplete mental models. They’re not saying to themselves “nah, I won’t bother.” As the authors say:

“people tend to satisfice rather than optimize, settling on a solution that is deemed good enough even though it may be suboptimal.”

Eventually, flaws in mental models are revealed when some interaction with the environment causes surprise. The authors are quick to point out that not all flaws in mental models cause accidents, as humans are able to detect errors and compensate for them. The main weakness of mental models, they tell us, is that the bar for accepting them is low. If enough information coming at us from the environment is consistent with what we expect, then we will most likely treat the model as continuing to be correct.

Confirmation bias, the human tendency to wait for information that is consistent with our current understanding rather than look for evidence that contradicts it, can contribute to overlooking data that would allow a model to be corrected.

The authors use the Kegworth aircraft accident as an example of co-occurrence leading to disaster. In this incident, an engine failed, causing smoke to enter the cabin. It was difficult to identify which engine was problematic, and when the pilots shut down the wrong engine, the smoke abated for a time. Further, when trying to verify that the correct action had been taken, the pilots were interrupted by other activities. Unfortunately, by the time anyone noticed that the shut-down engine needed to be restarted, it was too late to do so.

The authors give some general advice on how to deal with this, including training operators to be aware of human factors concerns. They believe that having pilots receive some education in human factors early on will contribute more and more to the dependability of critical systems. They, like others such as Rasmussen and Hollnagel, advocate for automation that is at least somewhat aware of operators, if not an outright intelligent system.

I believe that all of you readers who are operating complex systems are contributing to that dependability in your respective domains as well, taking the time to learn along with me here and studying elsewhere.


  • When two things co-occur, whether or not they are related, we are prone to think that they are, which can reinforce our mental model inaccurately.
  • Mental models are built from incomplete information.
    • Typically the model is built from some data from the environment, and enough knowledge to accomplish a particular goal.
  • Models are then revised when some sort of surprise occurs that tells us that the model and reality no longer match.
  • Confirmation bias makes it more likely that conflicting information will be dismissed rather than considered and integrated.
