Ten Challenges for Making Automation a “Team Player” in Joint Human-Agent Activity

This is a paper by Gary Klein, David Woods, Jeffrey Bradshaw, Robert Hoffman, and Paul Feltovich. If some of the material sounds familiar, it may be because this work is based on a much longer paper, which I covered here and here.

While the base work focused a lot on how humans coordinate, which can be very helpful to know and understand when working with humans or automation, this paper focuses specifically on the challenges that automation faces given the requirements of joint activity.

Joint activity among people requires four things:

  • They agree to work together (the basic compact)
  • They are both predictable.
    • If you cannot predict someone else’s actions at all, you can’t accomplish a joint activity since you have no idea what they’ll do next. The same is true in the reverse.
  • They are both directable.
    • If nothing you say or do changes another’s behavior, then you’re not really working on a joint activity, just nearby each other.
  • They maintain common ground

Notice it isn’t that they have common ground, it’s that it’s maintained. Common ground must be continually monitored and repaired. It’s made up of things like the knowledge you share and the assumptions you’re making. These are what let the other person understand your communications and signals as the activity progresses.

10 Challenges

Using the requirements of joint activity as a guide, the authors walk us through 10 challenges of using automation as if it were a person on your team. That’s a sort of idealized workflow: you get to interact with the automation in such a way that it isn’t something that slows you down or becomes incomprehensible, but is on your team working towards the same goals.

In some cases I’ve paraphrased the challenges for clarity here, but I’ve kept the meaning the same in all of them.

Challenge 1: fulfilling the requirements of the basic compact

Right now automation, even “intelligent agents,” can’t really “understand” the goals of your team or your organization, so they can’t signal whether or not they’ll enter into the basic compact, or signal when they’re leaving it.

Challenge 2: modeling other participants’ intentions and actions

Others can be predictable to us because we can model what their intentions or actions might be. We can predict if they might be struggling with a task or if they’re cruising along. Automation can’t really do this with us, and in many cases we can’t model what the automation will do either.

As the authors point out:

" No form of automation today or on the horizon can enter fully into the rich forms of Basic Compact that are used among people"

Challenge 3: Human-agent team members must be mutually predictable

I’ve emphasized mutually here because I think this is an important point that can get missed. So often, especially in software it seems, we treat expertise in working with some automation as a sort of gold standard we should aspire to. Expertise is of course valuable, and being an expert in using the automation is probably more desirable than not given the state of such things today. But why do we have to be?

It is in part because we cannot predict what the automation will do. Some automations become more and more flexible, but this just makes them harder and harder to predict.

" agents’ ‘intelligence’ and autonomy work directly against the confidence that people have in their predictability."

Challenge 4: Agents must be directable

Given some of the other challenges we’ve discussed, we can see that automation isn’t always directable. How many times have you found that you can give some software commands, but you can’t really tell it what to do or what you’re trying to accomplish?

The authors suggest that by having automation adhere to policies, we can improve on this. Policies would allow the behavior of automation to change without necessarily requiring code changes. These policies would allow people to set boundaries on the behavior of automation in different cases and change them as needed.
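
To make that idea a bit more concrete, here’s a minimal sketch of what a policy layer might look like. The paper doesn’t prescribe an implementation, so everything here (the `Policy` class, its fields, the `allows` check) is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """An operator-set boundary on agent behavior, adjustable without code changes."""
    name: str
    max_retries: int         # how many times the agent may retry on its own
    requires_approval: bool  # whether a human must confirm before the agent acts

def allows(policy: Policy, attempted_retries: int, human_approved: bool) -> bool:
    """Check a proposed action against the current policy before the agent takes it."""
    if attempted_retries >= policy.max_retries:
        return False
    if policy.requires_approval and not human_approved:
        return False
    return True

# Operators can tighten or loosen the boundaries as the situation changes:
incident = Policy(name="incident-response", max_retries=1, requires_approval=True)
routine = Policy(name="steady-state", max_retries=5, requires_approval=False)
print(allows(incident, attempted_retries=0, human_approved=False))  # False: needs a human
print(allows(routine, attempted_retries=2, human_approved=False))   # True: within bounds
```

The point isn’t the specific fields; it’s that the boundary lives outside the agent’s code, where people can see it and change it.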

Challenge 5: Relevant status and intentions must be obvious to other team members

Building on the authors’ point, we can see that when we can’t really direct some automation, can’t really predict what it might do, and can’t model it very well, then we can’t see things like whether it’s struggling to accomplish something. This is what can lead to the decompensation failure pattern.

The automation is compensating, which can hide the underlying issue. It begins to reach the edge of what it can compensate for, but does so silently. When it can no longer compensate, the failure is felt very suddenly. Because this is a pattern of how adaptive systems can fail, we can see it in everything from airplane wing deicers to human blood pressure.

The authors point out that this can leave humans “wondering what the automation is currently doing, why it’s doing that, and what it will do next”.
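
Here’s a toy sketch (my own illustration, not from the paper) of why silent compensation feels so sudden when it fails: the automation absorbs the disturbance up to its capacity, and unless it surfaces how close it is to that limit, the team only notices once the residual leaks through.

```python
def compensate(disturbance: float, capacity: float, warn_at: float = 0.8) -> float:
    """Absorb as much of the disturbance as capacity allows; return what leaks through.

    A silent implementation would just do the clamping. Surfacing how close we are
    to the limit is what lets teammates see trouble coming before it arrives.
    """
    absorbed = min(disturbance, capacity)
    if absorbed >= warn_at * capacity:
        print(f"warning: compensating at {absorbed / capacity:.0%} of capacity")
    return disturbance - absorbed

# The team feels nothing, nothing, nothing... then everything at once:
for d in [0.2, 0.5, 0.9, 1.3]:
    print(f"disturbance={d}, residual felt by the team={compensate(d, capacity=1.0):.1f}")
```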

Challenge 6: Agents must be able to observe and interpret pertinent signals of status and intentions.

Just as we need an automation’s or other team member’s status and intentions to be obvious, we need the automation to be able to understand our expressions and signals of the same.

Sending signals alone doesn’t do anything if they can’t be interpreted. The authors remind us that this is not a new challenge:

" This is consistent with the Mirror-Mirror principle of HCC [Human-Centered Computing]: Every participant in a complex sociotechnical system will form a model of the other participant agents as well as a model of the controlled process and its environment"

There are some, such as Charles Billings and David Woods (one of the authors), who believe that this challenge may always exist. That there will always be some gap between a human’s ability to coordinate and a machine’s, such that designing human-agent teams will always have difficulties.

Challenge 7: Automation must be able to negotiate goals

Very often, what we set out to do doesn’t remain the goal, or at least the only one, for long. In an incident, for example, you might start with a goal of mitigating some disturbance, but as time goes on and understanding grows, you may find that your goal is only to keep parts of the system safe from destruction.

This means we need automation to communicate with us about what its current goals are and what its future goals might be. In many cases, automation can produce an output that makes it seem like it would apply to all situations. We know this isn’t the case, though, and are reminded that:

" This approach isn’t compatible with what we know about optimal coordination in human- agent interaction."

Which to me seems like a much more diplomatic and academic way of saying, it just isn’t going to work.
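
As a hypothetical sketch of what this could look like, imagine the agent exposing its current goal and its likely next goals as plain, inspectable state that a human can renegotiate mid-incident (none of these names come from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class GoalState:
    """A minimal status surface an agent could expose to its human teammates."""
    current_goal: str
    likely_next_goals: list[str] = field(default_factory=list)

    def renegotiate(self, new_goal: str) -> None:
        """Let a human redirect the agent as understanding of the situation grows."""
        self.likely_next_goals.insert(0, self.current_goal)
        self.current_goal = new_goal

agent = GoalState(current_goal="mitigate the disturbance",
                  likely_next_goals=["restore full service"])
agent.renegotiate("keep the payment system safe from destruction")
print(agent.current_goal)       # keep the payment system safe from destruction
print(agent.likely_next_goals)  # ['mitigate the disturbance', 'restore full service']
```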

Challenge 8: Support technologies for planning and autonomy must enable a collaborative approach.

When we take a collaborative autonomy approach to problem solving and work, we accept that the steps build upon each other and are never set in stone. This means that automation that overcomes this challenge would need every part of it designed with this in mind:

“every element of an “autonomous” system will have to be designed to facilitate the kind of give-and-take that quintessentially characterizes natural and effective teamwork among groups of people”

The authors cite the work of James Allen and George Ferguson on Collaborative Management Agents as an example, as they were designed to support various team mixes, including human-human and human-agent.

Challenge 9: Agents must be able to participate in managing attention.

This stems from the need to continually repair common ground. Team members must be able to direct attention to signs, signals, or activities that are important. The lack of this ability in automation also contributes to the decompensation failure pattern. An automation may be compensating, but it doesn’t direct attention to the underlying problem, nor does it direct attention to the fact that it’s nearing its compensatory limits.

The authors point out that often the solution is to have some sort of threshold-crossing alarm (sometimes they really seem to get us software folks; I know I’ve done that before). But they point out that:

" in practice, rigid and context-insensitive thresholds will typically be crossed too early (resulting in an agent that speaks up too often, too soon) or too late (resulting in an agent that’s too silent, speaking up too little)"

Which also sounds very familiar to me 🙂
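
To illustrate the difference (again my own sketch, not something from the paper), compare a rigid threshold with one that also considers the recent trend, a crude stand-in for context:

```python
def rigid_alarm(value: float, threshold: float = 0.9) -> bool:
    """Fires on a fixed, context-insensitive threshold: often too early or too late."""
    return value > threshold

def trend_aware_alarm(history: list[float], threshold: float = 0.9, horizon: int = 3) -> bool:
    """Also looks at where the value is heading, so it can speak up before the line
    is crossed and stay quiet when the value is already recovering."""
    if not history:
        return False
    current = history[-1]
    slope = (current - history[0]) / max(len(history) - 1, 1)
    projected = current + slope * horizon
    return current > threshold or projected > threshold

print(rigid_alarm(0.85))                      # False: silent even though load is climbing
print(trend_aware_alarm([0.65, 0.75, 0.85]))  # True: projects we'll cross the line soon
```

It’s still a threshold, of course; the point of the quote above is that any fixed, context-insensitive rule will misfire somewhere, so directing attention needs to be sensitive to the situation.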

Challenge 10: All team members must help control the costs of coordinated activity.

If all this coordination sounds like hard work, don’t worry, you’re right! Coordinating with humans or automation takes time and effort. When we work with other humans, we have an implicit understanding of this, and that we’ll do what we can (however limited that may be) to help control those costs.

A key part of this, which I think is a good rule of thumb for automation design in general (though hard to accomplish), is that “the agents must conform to the operators’ needs rather than require operators to adapt to them.”

So what now?

Now that you know what the challenges are, there are a few different things you can do with this information. You can use it when you design your own systems or when you evaluate whether some automation is a good fit for your team or a task.

Or perhaps it can serve as a reminder that technology doesn’t always improve coordination and can hurt it instead.

If you’re someone who enjoys video, I’d be remiss if I didn’t point out that Jessica Kerr talks about some of these challenges in her awesome talk: Principles of Collaborative Automation

Takeaways

  • If you examine what makes coordination among humans possible, a number of challenges for automation become clearer.
    • Many of these exist because the automation does not communicate with the human parts of the system, or does so too late or too noisily.
  • David Woods advocates the idea that there will always be a gap here.
  • Many challenges stem from automation not communicating its status to us like a teammate would.
  • The cost of coordination can be high; it’s expected that everyone will do what they can to lower it, but automation doesn’t do this and often makes coordination more expensive.
  • We can use knowledge of these challenges to help design or evaluate systems for collaboration.

Discussion

Last week I hosted the first round of the Resilience Roundup Discussion Group; we’ll be discussing this paper this Friday. If you’d like to be a part, you can sign up here.

Last week we discussed some questions such as:

  • What can actually be done to encourage or create resilience as graceful extensibility?
  • How does graceful extensibility differ from sustained adaptability, and what does that mean for us?
  • What can we do to influence our organization if we don’t feel we have management buy-in yet on some of these ideas?

If you have questions of your own about this paper or related concepts, sign up here so you can be invited this Friday.
