This is a paper by Emily Patterson, David Woods, Richard Cook, and Marta Render. In case you hadn’t seen me mention it before, both Woods and Cook are part of Adaptive Capacity Labs, who sponsor this issue, but they never get to choose what I review or write here.
The authors go over “collaborative cross-checking” which they define as:
“a strategy where at least two individuals or groups with different perspectives examine the others’ assumptions and/or actions to assess validity or accuracy.”
They also include three example cases from medicine which I strongly suggest you check out, as they include commentary on why (or why not) the various cross-checks were effective. They’re fairly short and presented in a table, so you can follow both the recorded interaction and the authors' commentary.
The different perspectives can be differences on things like: experience, stance, goals, authority, or knowledge.
The primary goal of this process is to discover some sort of mistaken assessment or action in time to do something about it. While this is very much a short term focus, some longer term benefits can be seen as well.
The authors are careful to remind us that this, like other strategies, isn’t “free”. It incurs a real cognitive cost. It requires effort and time. And if a problem is found, then plans of course need to change, which in itself can add complexity.
Notice the effort bit. Because this is an effortful strategy, one that requires the humans participating to spend cognitive resources, you can’t just have a computer ask you “are you sure?” as much as I’m sure you all miss Clippy questioning your decisions.
Additionally, because it’s meant to be effortful, when this process is made into a routine, especially when the problem rate is low, it’s not likely to work. I’m sure we’ve all seen some form of this: a process like this starts out effective and intentional, and once formalized into routine it becomes just another box to check.
This strategy may not work in all cases though. For example, two people performing a “monitoring task” may not benefit. I’d argue that’s not really collaborative or cross-checking though. Additionally, I’ve discussed before how one should be wary of the idea of “vigilance decrement” in such a situation.
How much change actually takes place as a result of these cross-checking sessions also depends on your team and organization. If findings are celebrated, you’ll probably see better results than if they’re covered up or held against the people who did the original planning.
There are a number of benefits to be realized when this is done well; it can help:
- reveal hidden assumptions
- clarify goal trade-offs
- explore new solutions
- identify side effects
- identify boundary conditions
- discover contingencies
- find information gaps
- identify people who might be able to support or obstruct the plan
Even if those things aren’t realized (and you may not know right away if they are), there are some long term benefits to employing strategies like this, such as:
- Becoming more aware of other groups affected or involved in a plan
- Improved awareness of others' need for information
- Increased ability to anticipate others' perspectives as situations change.
- Increased awareness of other perspectives (which helps create common ground)
These long term benefits are why I’ve personally recommended these sorts of activities and strategies to teams, especially ones that have to do incident response work. After a while of doing this you tend to see some other benefits as well. The authors call these “long term second-order benefits” like:
- Transfer of knowledge across perspectives
- Increased team identity across previously distinct individuals or groups
- Improved coordination across “stovepiped” groups in an organization
- Practice and ability to identify ineffective cross-checking strategies
- And the chance to recognize opportunities for system and training redesign.
On reviewing the cases the authors found some patterns to guide us. One that really stood out to me is:
“personnel with weakly defined roles who are not consumed by production pressures can support collaborative cross-checking and other cognitively challenging sensemaking functions.”
This sounds almost exactly like what happens when I’ve been in an SRE type role and I’m not currently on-call. I have a weakly defined role (make sure things don’t go too horribly wrong) and an opportunity to support others under production pressure who are trying to make sense of things (“WTF?!?”).
They also highlight something that I’ve been coaching response teams in:
“processes can be rendered more observable either by explicitly communicating the rationale behind a plan and the intent behind an order or by supporting the ability for people in loosely coupled roles to ‘listen in’ on planning discussions”
Telling people what you’re doing and why can help make this (and other strategies) more effective.
- Collaborative cross checking is a way of helping to detect misassessment or incorrect action in time to do something about it.
- Studies from fields such as nuclear power, aviation, and medicine show success.
- In order to be effective cross checking needs to take place between individuals or groups with differences.
- Those differences could be things like experience, authority, stance or goals.
- When cross checking is made too routine it can become much less effective.
- The benefits of cross-checking are intended to be short term, but long term benefits can arise as well.
- Some benefits, while short term, may not be immediately obvious.
- In case you missed that last line: Telling people what you’re doing and why can help make this (and other strategies) more effective.