This is a 1995 paper by Nadine Sarter and David Woods from Human Factors: The Journal of the Human Factors and Ergonomics Society.
How in the World Did We Ever Get into That Mode? Mode Error and Awareness in Supervisory Control. In this paper, the authors discuss “mode error”: a problem, especially common in automation, where users end up making mistakes because they’re not aware of what mode the system is in. That might seem like an easy thing to know or keep track of, but with automation having an increasingly large number of modes, it is more and more of a problem.
Additionally, much automation, including the flight systems in “glass cockpits” (cockpits that have digital displays instead of analog gauges) studied in this paper, can put itself into different modes based on things that are not direct user input: reaching a certain altitude, for example, or the use of other controls, or even input from other users. All of these things come together to allow a point where a user doesn’t know what mode they’re in, doesn’t know how they got there, and as a consequence doesn’t have a model of how the system will behave.
“New attentional demands are created as the practitioner must keep track of which mode the device is in so as to select the correct inputs when communicating with the automation and to track what the automation is doing, why it is doing it, and what it will do next.”
I like this quote because it sums up a lot of the complexity of automation in one go and really frames our interactions with automation well. Oftentimes, in software especially, we can think of ourselves as just executing a script, just setting up a process, or just creating a cron job. But this idea that we are actually communicating with the automation, and that we do in fact have to put effort into tracking what the automation is doing and trying to predict what it will do next, is really important. Just keeping this in mind when designing automation could provide an improvement.
The increasing flexibility of technology also increases its complexity, which leads to increased mode error. The authors note, and I’ve experienced this myself, that this flexibility often gets cited as a feature since it enables the operator to select the best tool for the job. The flexibility comes at a price, though: it is now incumbent upon the operator to know when, where, and how to use the different modes.
Don Norman’s quote in the paper gives a succinct way to cause mode error:
“change the rules. Let something be done one way in one mode and another way in another mode”

Previous research on mode error had been done using word processors. In those studies, the mistakes were actively performed: you were doing tasks yourself with a piece of software or a device that only reacted to your direct input and commands. Now that automation has increased in complexity, it has some notion of autonomy and is event driven (not just input driven, as it was before).
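Norman’s “change the rules” idea can be sketched in a few lines of code. This is a hypothetical, minimal example (all names invented, loosely modeled on vi’s normal/insert modes), showing how the very same keypress does completely different things depending on the current mode:

```python
# A hypothetical sketch of mode-dependent rules: the same input is
# interpreted differently depending on the current mode.

class ModalEditor:
    def __init__(self):
        self.mode = "normal"
        self.text = ""

    def press(self, key: str) -> None:
        if self.mode == "normal":
            # In normal mode, keys are commands.
            if key == "i":
                self.mode = "insert"        # 'i' switches modes...
            elif key == "x":
                self.text = self.text[:-1]  # ...and 'x' deletes
        else:
            # In insert mode, the very same keys are just text.
            if key == "\x1b":               # Escape returns to normal mode
                self.mode = "normal"
            else:
                self.text += key

editor = ModalEditor()
editor.press("i")            # enters insert mode
for k in "hix":              # 'x' is literal text here, not a delete
    editor.press(k)
print(editor.text)           # -> "hix"
editor.press("\x1b")         # back to normal mode
editor.press("x")            # now 'x' deletes the last character
print(editor.text)           # -> "hi"
```

If the user loses track of which mode the editor is in, the same muscle memory produces opposite results, which is exactly the error the paper describes.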
But now, in “event driven task domains”, the way we as humans supervise the automation is very different from that original research. Often the automation is supposed to be a resource that gives a human supervisor a large number of modes or tools to carry out different tasks under varying circumstances. In these cases, it’s the role of the human to choose the mode that makes the most sense in the situation.
On the surface this can sound easy enough, but in order to do it, the human must know a lot more, and now also has more to monitor and more places to direct their attention in order to keep track of what mode the automation is in and what it’s doing to manage the system in question.
On top of that, these cognitive demands really ramp up when you consider that some systems will change modes without direct user input, perhaps based on something in the environment or for some safety purpose, or based on indirect input from the user, as is the case in certain forms of cockpit automation. Once automation can do this, it really ramps up the amount of mode awareness the human needs to have.
Preventing mode error
How much we need to maintain mode awareness is very much influenced by the design of the automation and what it can do. It will be no surprise to anyone who has looked into UX and UI that the interface between the person and the automation is crucial here.
“Modes proliferate as designers provide multiple levels of automation and various optional methods for many individual functions. The result is numerous mode indications distributed over multiple displays, each containing just that portion of the mode status data corresponding to a particular system”.
In addition, the designs can allow interaction between the various modes. All of this stretches the feedback loop longer and longer between when a human gives a certain input and when they can see its results. Because of this increasing feedback time, it’s even more difficult to know about errors; finding out about them very late makes them even harder to recover from.
The authors use the 1990 crash of Indian Airlines flight 605 as an example that represents this mode error problem writ large: a good example of how multiple modes, multiple ways to enter those modes, and some of those ways not being direct human input can coalesce into a disaster.
In this case, the pilot put the plane into a mode called “open descent” on approach without realizing it. This was a problem because that mode changes how airspeed is controlled, and as a result the flight path changed. Unfortunately, the error was not discovered until about 10 seconds before impact, much too late to recover.
It turns out that there are five different ways of activating open descent mode. Two can be selected manually, though one relies on the system being in a particular state: you can pull the altitude knob after selecting a lower altitude, or you can pull the speed knob so long as the aircraft is in expedite mode. The other three ways are all indirect and require no manual selection; they are related to having a certain target altitude, or act as protections to prevent the aircraft from going too fast.
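The mix of direct and indirect activation paths is the key detail here. A hypothetical, deliberately oversimplified sketch (invented names and conditions, not the actual A320 logic) shows how a mode can change with no operator action at all:

```python
# A hypothetical sketch: one mode, reachable both by direct operator
# action and by indirect system events. Not real autoflight logic.

class AutoflightSketch:
    def __init__(self):
        self.mode = "cruise"
        self.expedite = False

    # Direct, manually selected paths
    def pull_altitude_knob(self, new_alt: int, current_alt: int) -> None:
        if new_alt < current_alt:
            self.mode = "open_descent"

    def pull_speed_knob(self) -> None:
        if self.expedite:                 # only works in a particular state
            self.mode = "open_descent"

    # Indirect path: a protection fires with no operator selection at all
    def on_overspeed(self) -> None:
        self.mode = "open_descent"

af = AutoflightSketch()
af.on_overspeed()       # no knob was touched, yet the mode changed
print(af.mode)          # -> "open_descent"
```

With transitions like `on_overspeed` in the mix, the operator’s mental model has to cover events they never initiated, which is exactly why tracking the current mode becomes so hard.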
In this case, the pilots had to not only know what mode the system was in but also understand what it meant for the system to be in that mode, and, on top of that, know the other statuses of the automation so that they could encourage or avoid activation of particular modes.
Displays and inputs that look the same across modes but behave very differently can cause mode error as well. The authors saw one cockpit where pilots entered either vertical speed or desired flight path angle through the same display; the active mode determined how that input was interpreted.
What’s especially strange about this is that if you were to express these values yourself as a pilot, out loud to another human, the two inputs would sound drastically different! You might say a vertical speed of 2,500 feet per minute or a flight path angle of 2.5°.
But in this cockpit, entering these two values is almost the same process, and they look almost exactly the same on the display. With this design, pilots would have to know that the different modes change how their input is interpreted, then know to check what mode they are in, and then either set or read the value in that context. This is a pretty cognitively demanding task where the design could have been a lot clearer, potentially something that could be read at a glance instead.
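The ambiguity the authors describe boils down to one function of two arguments where the mode silently changes the meaning of the value. A tiny hypothetical sketch (the function name and scaling are invented for illustration) makes the hazard concrete:

```python
# A hypothetical sketch of the ambiguous-input problem: the same dialed
# value means radically different things depending on the active mode.

def interpret_dial(value: int, mode: str) -> str:
    if mode == "vertical_speed":
        return f"{value * 100} ft/min"   # "25" -> 2500 ft/min
    elif mode == "flight_path_angle":
        return f"{value / 10} degrees"   # "25" -> 2.5 degrees
    raise ValueError(f"unknown mode: {mode}")

# Identical input, wildly different commanded trajectories:
print(interpret_dial(25, "vertical_speed"))     # -> "2500 ft/min"
print(interpret_dial(25, "flight_path_angle"))  # -> "2.5 degrees"
```

The fix the authors point toward is design-level: make the interpretation visible in the display itself (units, formatting) rather than making the operator carry the mode in their head.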
Another problem occurs when these automated systems are accessible by multiple people. This problem will likely feel especially familiar to those of us in software who have been on an incident bridge or interacted with other widespread automation. If multiple people can access a system simultaneously and change it, then you need to know not only what mode you put the system in and what input you gave, but also, potentially, about the inputs that other people have provided.
This can be especially troublesome in these really flexible technologies, where you and another responder may have different ways of using the tool. I’ve seen this in small ways very easily. To experience it yourself, all you likely need to do is pair program and watch the other person use a shell. Often, how you use it and how someone else does are very different, despite the software being the same.
The authors point out that this whole problem can be the result of technology-centered automation as opposed to human-centered automation. I like that distinction, especially for us as software practitioners.
Culmination of previous research
The authors had been looking at this problem through the lens of aviation for a few years prior, performing a number of studies of how pilots interact with automation. During one of them, they built a data set of the sorts of interaction problems pilots encountered. They also observed pilots as they transitioned in training from one type of aircraft to a more automated one, so they were able to see these pilots before they had a chance to adapt to the new system.
Looking at that data set, they saw that difficulties in managing the modes, in how the pilots understood the modes, and in how they interacted with the automation all contributed to surprises and, as a result, to difficulty supervising and controlling the system.
They got 20 pilots to fly a mission in a simulator. They set up the simulator with a bunch of different tasks to do and events that would occur so that they could look at how aware, or perhaps unaware, a given pilot was of the mode. They would then ask the pilots after the fact about their knowledge and how they assessed the situation.
Throughout all of this, the studies provided pretty consistent data about the trouble these pilots were having. Most of the difficulty they saw was directly related to a lack of mode awareness (awareness of what mode or state the system was in) as well as an incomplete or inaccurate mental model of how the different modes worked together.
The automation surprises tended to occur in “nonnormal, time-critical situations”: things that don’t happen often, but when they do, time is of the essence. This could be something like aborting a takeoff or avoiding a collision.
In some cases, like the aborted takeoff, 65% of the pilots were not aware that the automated system was in charge of the thrust and as a result didn’t disengage it so they could have full control of the plane. When the researchers went further and debriefed these pilots, 15% of them were able to describe what mode was active, its settings, and the state of the system.
This showed that the 15% had the knowledge in their heads; it wasn’t that they’d been uneducated, but in the moment they were unable to apply that knowledge. Further, only four of the 20 pilots were absolutely correct in managing the automation during that aborted takeoff. And it wasn’t that these four happened to be super pilots: one of them said he was able to do so simply because he was trying to go by the book as much as possible, not because he really understood what was going on with the automation.
And so it went through a variety of these scenarios. It turns out that the more experience a given pilot had with the automation, the better they were able to manage it and know about the trade-offs in different contexts. Pilots who had less time in that sort of cockpit would typically have only one way of doing something.
This could reflect a way of dealing with the complexity of automation by just ignoring some of the options, even if the one strategy they are aware of is sub-optimal. I think this is interesting because these are trained people in a specialty field, often just like we are in software. And it isn’t the time they’ve spent flying throughout their careers that seems to affect this, but the experience they had with that specific automation.
The data they developed on these problems showed that there are two contributing factors:
- “buggy” mental models
- opaque indications of the status and behavior of the automation
Buggy mental models trace back to the people designing the systems: they don’t take into account the new knowledge that the automation will require of the user, and they fail to design in mechanisms that help the people operating it both initially acquire and then maintain that knowledge so that it’s usable in the moment, not just something they could recite after the fact. Buggy mental models also come from training that doesn’t take into account the operators’ need to explore, experiment, and better understand how the systems work and how to work with them.
The authors call this “the problem of opaque, low observability interfaces,” and I find that to be something I really relate to. I’ve seen a number of those and have had that experience myself, of not really feeling proficient in a system but just sticking to my one way.
Some of the things the authors bring up are, I think, good to consider in any of our automation designs, especially that the status of the automation is itself a sort of data that the operator has to interpret and maintain an assessment of over time so that they really know what’s going on. That’s something we can keep in mind when developing our own automation or even monitoring interfaces.
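One way to act on that takeaway is to treat mode status as first-class, observable data. Here’s a minimal hypothetical sketch (all names invented) where every mode transition is recorded along with what caused it, so an operator can answer “how did we ever get into that mode?” after the fact:

```python
# A sketch of making automation mode observable: record every transition
# and its cause, whether direct (operator action) or indirect (system event).
from datetime import datetime, timezone

class ObservableMode:
    def __init__(self, initial: str):
        self._mode = initial
        self.history = [(datetime.now(timezone.utc), initial, "init")]

    @property
    def mode(self) -> str:
        return self._mode

    def transition(self, new_mode: str, cause: str) -> None:
        # Every change carries its cause, so the history reads as a story.
        self.history.append((datetime.now(timezone.utc), new_mode, cause))
        self._mode = new_mode

m = ObservableMode("cruise")
m.transition("open_descent", "overspeed protection")  # indirect cause
for _, mode, cause in m.history:
    print(mode, "<-", cause)
```

This doesn’t solve the attentional problem on its own, but it makes the mode and its provenance something the operator can inspect rather than something they have to reconstruct from memory.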