Ironies of Automation
I’ve been hearing this paper discussed quite a bit recently in various circles, so I thought it would be a good time to revisit it.


This is a paper by Lisanne Bainbridge from the Conference on Analysis, Design, and Evaluation of Man-Machine Systems.

Bainbridge points out right off the bat that even highly automated systems need some amount of human supervision, which means they are still man-machine systems. As a result, human factors still matter.

Mostly the focus is on process systems, but airline cockpits get mentioned as well.

The two ironies are:

  1. Errors that designers make when designing a system can be a major source of operating problems

  2. When designers try to get rid of the human, that human is still left to do whatever the designer could not figure out how to automate.

This leads to a few different problems. For one, the human is left with a mixed bag of tasks and no support for doing them. Also, if the computer is doing much of the decision-making because it’s faster than a human, how can the human verify that it is working correctly under time pressure?

There is also the danger that, removed from the process, the operator’s skills will atrophy. Without practice during the periods of manual control, it can be difficult for the operator to intervene when they need to.

Scattered tasks

If the human is left to do only the seemingly random bits that the designer could not figure out how to automate, it is difficult for the operator to get a chance to practice their skills or to understand the whole process. This can result in situations where the operator doesn’t have enough knowledge to generate new, successful strategies.

It also removes the opportunity for the operator to get feedback about the effectiveness of current strategies.

Monitoring

Another common view is that the human can simply monitor the process and perhaps alert someone more skilled if something goes wrong. We’ve already discussed the problem of checking the computer’s decisions under time pressure above. But there is an additional problem: vigilance studies going back to at least the 1950s indicate that it is hard for humans to effectively monitor a mostly unchanging display for more than about half an hour.

This means we must do something similar to what we do in software: provide specialized displays and alarms. But if the process is especially high consequence, we can end up back at the same problem, where the human in the loop can’t effectively diagnose whether the alarms themselves are working correctly.

“If the computer is being used to make the decisions because human judgment and intuitive reasoning are not adequate in this context, then which of the decisions is to be accepted? The human monitor has been given an impossible task.”

Further, automatic systems can cover up the fact that a failure is occurring by silently intervening and hiding trends until the system collapses. You may recognize this as the decompensation pattern from How Adaptive Systems Fail.
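To make the software parallel concrete, here is a minimal sketch of why this masking matters for monitoring (the metric names and thresholds are hypothetical, not from the paper): if we alert only on the output the automation keeps stable, a retry or autoscaling layer can quietly absorb a growing problem until it runs out of headroom. Exposing the automation’s own compensation effort as a first-class signal makes the trend visible earlier.

    # Minimal sketch: watch the automation's compensation effort, not just the
    # output it keeps stable. All names and thresholds are illustrative only.

    def check_signals(latency_ms: float, retry_rate: float, max_retry_rate: float) -> list[str]:
        """Return alerts; latency alone can look healthy while retries climb."""
        alerts = []
        if latency_ms > 500:
            # Fires only once the automation has run out of headroom.
            alerts.append("latency degraded")
        if retry_rate > 0.5 * max_retry_rate:
            # Fires while the compensation trend is still building.
            alerts.append("automation compensating hard")
        return alerts

    # Latency still looks fine, but the retry layer is already working hard to
    # hide a failing dependency -- the decompensation trend is visible here.
    print(check_signals(latency_ms=120.0, retry_rate=40.0, max_retry_rate=60.0))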

Collaboration

“By taking away the easy parts of his task, automation can make the difficult parts of the human operator’s task more difficult.”

Bainbridge tells us that simply assigning tasks to whichever of the human or the machine seems best at them is not a workable approach, because it doesn’t consider how they will work together. This is the same idea expressed in Cognitive Systems Engineering.

Instead, Bainbridge suggests that computers could be used to support decision-making and to provide checks on the effects of actions, rather than acting as barriers.

She also cautions that while software can be used to make displays that are relevant to the operator, it is possible to create an interface that works well under normal, low-tempo conditions but poorly in abnormal situations.

This is something I’ve seen fairly often on software teams: dashboards are more often made for times when there is little activity and no time pressure, rather than for emergency use.
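As a loose illustration of that distinction (a minimal sketch; the function names and metrics are hypothetical), the same data can be rendered as a rolled-up indicator for quiet periods and as ranked raw signals for diagnosis under pressure:

    # Illustrative only: the same metrics, rendered for two different tempos.
    # Values are normalized to each signal's alarm threshold (1.0 = at threshold).

    def render_summary(metrics: dict[str, float]) -> str:
        """Low-tempo view: a single rolled-up health indicator."""
        healthy = all(value < 1.0 for value in metrics.values())
        return "ALL SYSTEMS NOMINAL" if healthy else "DEGRADED"

    def render_diagnostic(metrics: dict[str, float]) -> str:
        """Abnormal-situation view: every raw signal, worst first."""
        ranked = sorted(metrics.items(), key=lambda item: item[1], reverse=True)
        return "\n".join(f"{name}: {value:.2f}" for name, value in ranked)

    metrics = {"error_ratio": 0.2, "queue_depth": 1.8, "retry_rate": 0.9}
    print(render_summary(metrics))     # enough when nothing is wrong
    print(render_diagnostic(metrics))  # what the operator needs while diagnosing

The point is not these particular functions, but that the abnormal-situation view has to be designed deliberately; it doesn’t fall out of the quiet-period dashboard for free.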

Ultimately, Bainbridge concludes that the human must “know which tasks the computer is dealing with and how”.

Takeaways

  • Having humans and computers each do what they’re good at is not an effective approach
    • It is better to have them collaborate as a whole system
  • If a system is designed to automate as much as possible, then humans are left with whatever the designer couldn’t figure out how to automate.
    • This leaves the human with scattered, seemingly unrelated tasks
  • Without the chance for feedback and practice, automation can harm an operator’s ability to intervene when things go wrong