This is a chapter from Resilience Engineering in Practice by some familiar names: Johan Bergström, Nicklas Dahlström, Sidney Dekker, and Kurt Petersen. We've heard from some of these authors before, back in issue 4.
This time they're back to talk about how to develop training to increase organizational resilience.
Training organizational resilience in escalating situations
This is directly relevant to many different areas, including incident response, software or otherwise. The training methods outlined here contrast with what might be seen as typical training, that is, simply drilling "correct" actions into someone ("rote training").
The sort of training the authors talk about is based on a framework that they provide. This framework can be used to explain a bit about organizational resilience and also as a tool for deciding if decisions or actions in a scenario were contributing to resilience.
The authors call these “generic competencies” since this framework is for many different domains, not a specific one. These competencies are competencies the team should have, not things that one person or a small number of people should have.
The framework is a cycle, starting with:

Information management:
- Explicit goal formulation
- Sorting and prioritizing

Then communication and coordination:
- Roles and responsibilities
- Balancing flexibility and rigidity

Then decision making:
- Distributing the decision-making process

Then effect control:
- Monitoring effects
- Updating goals
- Updating the process

Then it circles back to information management.
In an escalating situation, there are two effects on data processing. One is that you have an increased amount of data to process. This can be good, since you now have more information to draw from, but it can also be a problem, since it's now more difficult to find something meaningful in that large pool of data. Woods, Patterson, and Roth call this the "data availability paradox". The authors cite Dörner's advice on how to deal with this: use shared and explicit goals. Once teams have made these goals, they can use them to filter what data is important and decide who to distribute it to.
It's important to know what other team members' roles are and what tasks they'll do as a result. Still, there are two main problems for team structure during an incident:
It can be difficult to recognize the demand on each team member, which can cause them to become overloaded. It's important that team members keep an eye on this and reassign tasks or bring in more resources if possible.
Incidents don't care about your response model and may not always fit it. As a result, teams need to continually evaluate whether their chosen organizational structure is a good fit for the situation. This can be somewhat avoided by maintaining a more flexible structure.
This area is all about strategies that can be used to make decisions.
With a high volume of incoming information and a continually changing situation, using consensus to make every decision is impossible. There's no way to take in all the information and still be proactive.
The authors warn that the answer to this is not a rigid hierarchical structure either, especially one with a single team leader (some sort of captain) making all of the decisions. That team leader would fall prey to the same problem of becoming overloaded.
Instead, the antidote is to use those shared and explicit goals to guide the decisions to be made. Then, decision making must be distributed across the team. This is different from getting all team members to make a single decision; instead, all team members will be making some decisions.
In order for this to be successful, the team must develop effective strategies for sharing information, especially about what decisions they made and what new goals need to be pursued as a result.
This is "constantly monitoring and updating the process". This is where you question your goals and possibly update them as a result, perhaps reassigning who is responsible for what or changing which tasks you're working on. The authors suggest asking: "what could be wrong in our understanding of the situation?"
So given this framework what does this mean for effective training scenarios?
If the training is to help people practice these competencies in escalating situations, then the scenarios need to simulate at least some parts of escalating situations. The authors cite Woods and Patterson’s summary of this:
- There is a cascade of effects in the monitored process
- The cascade of effects should demand an increase of cognitive activities among the participants
- The cascade of effects should demand an increase in coordination among the participants.
- The cascading process should therefore not be isolated to one particular participant's area of responsibility, but should instead demand different reactions from all participants.
- The cascade and escalation should be a dynamic process
Other facets of the scenario that can help maximize its usefulness:
- “Try to force people beyond their learned rules and routines”
- "Contain a number of hidden goals". This is not a matter of tricking people, but the idea that there should be different ways people could solve the scenario, for example by de-escalating it, though they should be required to vocalize their solution. This makes for good practice, since these situations typically cannot be solved by a single individual.
- The scenario should ideally also include consequences of actions that are difficult to predict, so that people are forced into proactive thinking and, as a result, must first articulate their expectations.
- There may also be utility in creating scenarios where fixation errors can occur.
- Alternatively, the simulation could create a lot of noise that might normally tip the team into "thematic vagabonding", forcing them to confront that.
So does this work?
To find out if the framework would hold up, they tested it with a Swedish rescue team.
They put two groups through a two-day simulation and compared them with two other groups that didn't get this training. Then they all had to participate in a simulated response: 5 to 7 people would take different roles on a passenger boat caught in a storm in the Atlantic.
During the simulation, different events would occur that would force the team to establish strategies from the framework or else the situation would escalate beyond their control.
They included things early on like a drunk passenger, or a small fire, or an injured child.
The participants received printouts of data updates and had only blueprints and maps. This meant that they had no established visualization of the simulation and were not given predefined strategies for how to cope.
They then measured the outcome: things like the number of injuries, casualties, and damage to the ship. But they also measured "process": a qualitative assessment of how the group managed.
Both groups were initially unsuccessful. But both groups did improve the second day and said that the exercise had been helpful.
The group that didn't receive the training in these competencies (control) tended to have rigid role structures that would break down as situations got more difficult, as opposed to those who did have the training (experimental), who would assign a team leader to look over the entire process and make suggestions.
The groups also differed in how they handled briefings. Both groups met for briefings, but the control group held a briefing anytime new information came in, updating everyone on all information, which eventually led to spending more time on briefings than performing work.
The experimental group, by contrast, used those explicit goals and used the briefings to update each other on the decisions they had made and to set new goals.
This allowed them to “focus on expectations rather than history”
A number of patterns distinguished the experimental and control groups, respectively:
- Indistinct roles at high data load versus hard manuals
- Using goals to establish a proactive process versus reactive process
- Briefing sessions to update each other of decisions made and revise the process versus briefing sessions to establish a consensus on what decisions to make
- Identifies the problem of doing other people's work versus no understanding of the importance of roles
- Discusses the difficulties in formulating explicit goals and the benefits of doing so versus believes implicit goals are capable of guiding the management
- Discusses difficulties in being proactive versus wrongly believes that some actions were proactive
- Generally good at evaluating their own actions versus expresses the belief that in real life there are predetermined rules and procedures for all situations
The biggest difference between these groups came in the debriefing session.
The control groups often got defensive and criticized the simulation, saying that their rigid structure would have held up in a "real world crisis". The experimental group, on the other hand, believed that the simulation didn't need to be perfect in order for them to learn from the experience.
Additionally, it became clear in the debriefing session that having the scenario be outside the participants' domain can help keep them from becoming defensive or worrying about how their performance looks to their peers.
“These competencies must be practiced, not by drilling prescriptive plans and procedures, but by adhering to the principles of the very nature of unexpected and escalating situations”
Want to work together? You can learn more about working with me here: https://ThaiWood.IO/consulting
Can you help me with this [question, paper, code, architecture, system]? I don’t have all of the answers, but I have some! Hit reply and I’ll get right back to you as soon as I can.
**Did someone awesome share this with you?** That's great! You can sign up yourself here: https://ResilienceRoundup.com
Want to send me a note? My postal address is 304 S. Jones Blvd #2292, Las Vegas, NV 89107
Subscribe to Resilience Roundup