Team Play with a Powerful and Independent Agent: A Full-Mission Simulation Study

Make sure to check out the link at the bottom for Chaos Conf next week!


This week we have a continuation of the series we’ve been following from Nadine Sarter and David Woods. In this issue we’ll examine how automation can create new problems, especially when the operator is expected to keep track of multiple modes with little support.

While this paper focuses specifically on the A320, the authors are very clear that this is not an issue with this one aircraft, or even with aviation in general, but a problem that can occur whenever automation doesn’t act as a team player.

Automation acts as more of a “team player” when it behaves in a way that allows humans to keep track of what it’s doing and anticipate what it will do next. The authors suggest that this measure, how much of a team player a given technology is, can even help predict how effective and successful that technology will be.

Conversely, when automation isn’t a team player, it can be clumsy and confusing. This is particularly true when there’s the possibility of “mode error,” where an input that is fine in one mode causes unexpected actions in another. This puts a burden on the operator to continually exercise “mode awareness” to keep track of what state the automation is in. If that mode awareness is lost, mode error can occur.
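If it helps to see the idea in software terms, here’s a minimal, hypothetical sketch (my own analogy, not code from the paper) of how the exact same operator input can mean very different things depending on which mode happens to be active:

```python
# Hypothetical sketch of a mode error: the same input is interpreted
# very differently depending on which mode the automation is in.

class DescentAutomation:
    def __init__(self) -> None:
        # The active mode is internal state the operator has to keep track of.
        self.mode = "VERTICAL_SPEED"  # or "FLIGHT_PATH_ANGLE"

    def set_target(self, value: float) -> str:
        if self.mode == "VERTICAL_SPEED":
            # Read the dial value as thousands of feet per minute.
            return f"descend at {value * 1000:.0f} feet per minute"
        # In the other mode, the very same dial value is read as a path angle.
        return f"descend on a {value:.1f} degree flight path"


automation = DescentAutomation()
print(automation.set_target(3.3))  # "descend at 3300 feet per minute"

automation.mode = "FLIGHT_PATH_ANGLE"
print(automation.set_target(3.3))  # same input, very different outcome
```

Nothing about the input itself tells the operator which interpretation is in effect; that knowledge has to come from tracking the mode.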

I think this is important for us to keep in mind as we develop tools and systems for others. How often are we expecting users to keep track of what’s going on while giving them nothing more than subtle clues, or perhaps nothing at all?

And keep in mind that this study was done with experienced pilots. The authors even set up the simulator scenarios to include some “standard proficiency actions,” which would have helped reveal whether this was simply an issue of inexperience. None of the pilots struggled in that area. When we see missteps with our own technology, especially technology we’re familiar with or have built ourselves, it’s common to write off anything that isn’t an outright bug as a “training issue.” But I think this study, and its question of how the automation behaves toward the human team and how well it integrates, can help us step out of that mindset.

The 18 experienced pilots were asked to fly 90 minutes on a simulator. They were informed about the experiment and had an instructor with them who acted as the captain. Afterwards, the pilots were interviewed and had a chance to ask questions about the experience. Aside from the “standard” bits, the authors explain:

All other scenario events and tasks were designed to test pilots' awareness of the status and behavior of the automation. The events involved a high potential for surprise arising from a mismatch between actual system behavior and pilots' likely expectations, which are known to guide their system monitoring.

I think there’s a temptation to say, well, sure, of course if you pick tricky scenarios some pilots will be tricked. But that misses the point: the authors didn’t change the system, they used a high-fidelity A320 simulator. They were able to create these tricky situations because of the way the system exists today; they didn’t create the system.

I won’t get into all the different ways the flight automation behaved, but one really stood out for me. In one of the scenarios the pilots were asked to use the automation after getting clearance to land on runway “24L”. Air traffic control also asked them to stay within certain altitude constraints, something they can program into the flight computer. After doing so, they were informed that they should actually land on runway “24R”. Doesn’t seem like it should be a big change, right? But:

When the pilot changed the runway identifier in the MCDU [Multifunction Control and Display Unit] to 24 R, the automation also erased all previously entered altitude constraints, even though they still applied.

Not only did the automation erase that information, it only communicated the erasure through the absence of two icons that had been there previously. Four pilots never noticed, and another four only detected it by looking at a different map display. Ten of the pilots noticed almost immediately or even anticipated the issue. But even then, of the 14 who did catch it, only 12 were able to recover in time by entering the data again. So not only was it hard to detect, it was hard to correct!
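The same pattern shows up in plenty of software we build. Here’s a small, hypothetical sketch (the class, method, and waypoint names are made up) of an update that silently wipes out related data the operator still expects to hold:

```python
# Hypothetical sketch: changing one field silently clears related settings,
# and nothing actively tells the operator that it happened.

class ApproachPlan:
    def __init__(self, runway: str) -> None:
        self.runway = runway
        self.altitude_constraints: dict[str, int] = {}

    def change_runway(self, runway: str) -> None:
        # Side effect: picking a new runway rebuilds the approach and drops
        # every previously entered constraint, even though they still apply.
        self.runway = runway
        self.altitude_constraints.clear()


plan = ApproachPlan("24L")
plan.altitude_constraints["WAYPT1"] = 8000  # cross this waypoint at 8,000 ft

plan.change_runway("24R")
print(plan.altitude_constraints)  # {} -- the constraints are simply gone
```

No error, no warning, just an absence the operator has to notice on their own.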

Another scenario involved a “go-around,” where a pilot has to abort a landing and try again, in this case at less than 100 feet above the ground after turning off the flight director. This is rare, and especially challenging in such a fast-paced moment. The automation doesn’t make it any easier. It turns out this is the only situation where following the normal procedure does not automatically arm the autothrust system. Here, pushing the thrust levers to the go-around position as normal actually disconnects the autothrust system, forcing the pilot to control thrust manually.
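As a rough, hypothetical sketch (the function and flag names are mine, and this greatly simplifies the real system), the trap is that the one “normal” action behaves differently in exactly one corner of the state space:

```python
# Hypothetical sketch: the standard action arms the automation in every
# situation except one, where the very same action disconnects it.

def push_levers_to_go_around(going_around: bool, below_100_feet: bool,
                             flight_director_on: bool) -> str:
    if going_around and below_100_feet and not flight_director_on:
        # The one corner case: the "normal" action hands thrust control
        # back to the pilot at the busiest possible moment.
        return "autothrust DISCONNECTED -- pilot controls thrust manually"
    return "autothrust armed"


print(push_levers_to_go_around(False, False, True))  # "autothrust armed"
print(push_levers_to_go_around(True, True, False))   # disconnected
```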

As shocking as this can be to read, especially if we don’t have experience in this domain, it gives us a chance to think about our own systems. How often are we doing similar things in our own automation? How can we make our automation behave more like a team player? Too often automation is clumsy and gets in the way, and these tense, high-tempo situations, precisely when support is needed most, are where it helps the least.

Takeaways

  • As technologies and automation advance, new types of errors can be created, even while other problems are fixed.
  • This means we cannot count on “better” technology fixing the issues that may exist in a given system.
  • Without some way for operators to understand the current status of the automation and anticipate its future state, mode error (where an input that would work in one mode causes different behavior in another) becomes likely.
  • The more that automation behaves as a team player, the more this sort of error can be avoided.
