The 'problem' with automation: inappropriate feedback and interaction, not 'over-automation'

Welcome back to another week of Resilience. If you didn’t see last week’s issue, I apologize; I’ve resolved the issue with the mailing software.

This week we’ll be talking about a paper suggested by Dr. Richard Cook, as well as featuring some videos from REdeploy. Thanks for all your suggestions and background information, Dr. Cook.

Estimated reading time: ~5 minutes (plus videos!)


Last week’s issue:
If you missed last week’s issue, you can read it here.

The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’.
As Dr. Cook points out, this paper also comes from the Royal Society discussion in which last week’s Rasmussen article appeared: Human Factors in Hazardous Situations.

You may know Dr. Don Norman as part of the Three Mile Island accident investigation team or as the author of The Design of Everyday Things.

Here he talks about the problems that exist with automation and shoots down the argument that “over-automation” (automation in too many places) is the problem.

Norman goes on to explain that the real issue is how “smart” the automation is. The pain we currently feel is not due to automation’s pervasiveness, but to the fact that it needs to be either smarter or dumber; the middle ground we’re stuck in is part of the problem.

Norman points out that if a human were doing the task as part of our team, we’d typically have a conversational interaction, one where we’d get updates and feedback.

To make his point, and to demonstrate that this issue occurs independent of any given technology, Norman takes three case studies from commercial aviation and extrapolates from them, since, as he says, “aviation is the best documented and validated of all industrial situations”:

1 The Case of the Loss of Engine Power

A 1985 China Airlines 747 flight lost power in its number 4 engine. Instead of the plane yawing to the right as it normally would, the autopilot compensated for as long as it could. When it finally failed to compensate, the crew wasn’t left with enough time to figure out the problem, and the plane dove almost 32,000 feet before recovery, severely injuring two people and damaging the aircraft.

2 The Case of the “Incapacitated” Pilot

Here Norman uses a non-technological example to show that this isn’t really a technology problem per se, treating delegation as a form of automation. He suggests that whether you “ask” the autopilot or a crew member to do something, from your point of view the task is still “automated.”

This case study involves a commuter aircraft that crashed while attempting to land; the captain was killed, and the first officer, along with six passengers, was severely injured.

In this case the first officer noticed that the approach for landing wasn’t right and told the captain, but the captain didn’t respond, even though he was required to. According to multiple pilots at the airline, the captain hardly ever responded to such calls.

Norman points out that there was also a great deal of social pressure on the first officer: the captain was the president of the airline and had just hired him.

The accident report gives some health information, and Norman adds that the captain may already have been dead or dying of a heart attack, which in this instance would explain why he was unable to respond. From the outside, though, this lack of response was indistinguishable from all the previous occasions.

From the accident report:

“The first officer testified that he made all the required callouts except the “no contact” call and that the captain did not acknowledge any of his calls. Because the captain rarely acknowledged calls, even calls such as one dot low (about 50 ft below the 3° glide slope) this lack of response probably would not have alerted the first officer to any physiologic incapacitation of the captain. However, the first officer should have been concerned by the aircraft’s steep glidepath, the excessive descent rate, and the high airspeed”

Lest you think Norman has just picked very obscure cases, he cites a simulator study by United Airlines:

“when the captain feigned subtle incapacitation while flying the aircraft during an approach, 25 percent of the aircraft hit the “ground”. The study also showed a significant reluctance of the first officer to take control of the aircraft. It required between 30 sec and 4 min for the other crewmember to recognize that the captain was incapacitated and to correct the situation”

3 The Case of the Fuel Leak

Here the first officer noticed that fuel was being drawn from a tank he hadn’t expected, told the second officer, and sent him to go take a look. The crew then noticed that the control wheel was turned to the right; disengaging the autopilot to check allowed them to see that they were “about 2,000 lbs out of balance.”

The second officer then reported that they were losing a large amount of fuel. That original observation and subsequent investigation brought the problem to the crew’s attention so it could be diagnosed and fixed, even though the second officer wasn’t sure what the problem was when he first noticed the aberration.

Norman uses case 3 to speculate about what might be possible if automation were more advanced:

“Suppose the automatic pilot could have signaled the crew that it was starting to compensate the balance more than was usual, or at the least, more than when the autopilot was first engaged? This would have alerted the crew to a potential problem. Technically, this information was available to the crew, because the autopilot controls the aircraft by physically moving the real instruments and controls, in this situation, by rotating the control wheel to maintain balance. The slow but consistent turning of the wheel could have been noted by any of the three crew members. This is a subtle cue, however, and it was not noted by either the pilot or the co-pilot (the first officer) until after the second officer had reported the fuel unbalance and had left the cockpit.”

Each of these case studies is an example of a crew being “out of the loop,” unable to get feedback when needed and respond accordingly. Instead, they only receive information when some essentially silent compensatory mechanism fails.

Norman points out that these sorts of “out of the loop” problems don’t tend to occur in social, effective teams, and suggests that someday this could be a model for how our automation interacts with us.

He admits that presenting feedback in an effective way is not easy, but he also points out that one model we’re likely all familiar with is especially ineffective:

“One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems. The proliferation of these alarms and the general unreliability of these single-threshold events causes much difficulty”
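
Translated into the kinds of systems most of us run, that pattern looks something like the sketch below. This is my own illustration, not something from Norman’s paper, and the disk-usage metric and threshold in it are made up:

    # The "single threshold" alarm pattern Norman criticizes, applied to a
    # hypothetical disk-usage metric: one fixed trigger point per instrument,
    # and no feedback at all until that point is crossed.
    DISK_USAGE_THRESHOLD = 0.90

    def check_disk(usage_fraction: float) -> None:
        # A disk creeping from 40% to 89% over a week produces no feedback;
        # at 90% a buzzer suddenly sounds, with no history or context, and
        # every other instrument behaves the same way independently.
        if usage_fraction > DISK_USAGE_THRESHOLD:
            print("ALARM: disk usage high")

    for reading in [0.42, 0.61, 0.78, 0.89, 0.91]:
        check_disk(reading)  # silent, silent, silent, silent, ALARM

Nothing in that loop tells the operator anything about the trend, which is exactly the “out of the loop” position the crews above were put in.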

It might be easy to say here, “Well, there was feedback; what about the control wheel in case 3?”

Once again, Norman has thought ahead. While writing the paper he went through a simulation of a 727 flight, and yet again the same thing occurred: the second officer missed the wheel position as an indicator of a problem, even though he’d already read the accident report! He was busy performing his normal duties and was focused on those.

While Norman doesn’t give us all the answers here, he certainly equips us with some ways to rethink our automation as we develop it, and some potential goals. Does your automation isolate humans and silently compensate? Are your alerts single-threshold conditions? Is your feedback perhaps too subtle, like the control wheel?
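
If you want to experiment with the opposite approach, here’s a minimal sketch in the spirit of Norman’s autopilot suggestion from case 3. It’s my own illustration rather than anything from the paper, and every name in it (the CompensatingAutomation class, the notify callback, the drift_factor) is hypothetical:

    # Automation that still compensates, but keeps the humans in the loop by
    # reporting when its compensation drifts well past the level it saw when
    # it was first engaged (think of an autoscaler or retry layer quietly
    # absorbing a growing problem).
    class CompensatingAutomation:
        def __init__(self, notify, drift_factor: float = 2.0):
            self.notify = notify          # how we talk to the operators
            self.baseline = None          # compensation level at engagement
            self.drift_factor = drift_factor

        def compensate(self, amount: float) -> None:
            if self.baseline is None:
                self.baseline = amount    # "when the autopilot was first engaged"
            elif amount > self.baseline * self.drift_factor:
                self.notify(
                    f"Now compensating {amount:.1f}, up from {self.baseline:.1f} "
                    "at engagement; something may be drifting, please take a look."
                )
            # ...apply the actual compensation here...

    auto = CompensatingAutomation(notify=print)
    for load in [1.0, 1.1, 1.4, 2.3, 3.1]:
        auto.compensate(load)  # speaks up at 2.3 and 3.1, well before it fails

The specific heuristic doesn’t matter; the point is that the automation volunteers feedback about the trend while there’s still time to act, instead of compensating silently until it fails, like the autopilot in case 1.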

REdeploy Conference Videos
I had the opportunity to attend Mary Thengvall and J. Paul Reed’s excellent conference, REdeploy, and now that the videos are posted, I highly recommend that you check them out.

REdeploy is “A conference exploring the intersections of resilient technology, organizations, and people,” and it was a big influence on me to start exploring more of these sorts of subjects.

I’ve been making my way back through these videos (and learning more each time). They’re all great, but if you’re looking for a place to start, a few that have been on my mind lately are:

  • John Allspaw’s In the Center of the Cyclone: Finding Sources of Resilience
  • Matty Stratton’s Fight, Flight, or Freeze — Releasing Organizational Trauma
  • Jessica Kerr’s The Origins of Opera and the Future of Programming
