This is a chapter in Resilient Health Care by Robert Jay Stevens, David D. Woods, and Emily S. Patterson.
I really like this chapter because it takes some of the work that David Woods has been doing around describing the adaptive universe (which I’ll be writing about over the next while) and gives it some concrete examples.
Often, when the theory gets described in general terms, you hear things like "agents do this" and "the system does that," but it can be hard to understand what that means without a concrete example, which this chapter provides.
This chapter looks at a study of how an emergency room functioned, and then examines that information through the lens of resilience engineering.
P.S. I’d love to hear if this week’s issue helped you and how. Please reply and let me know!
Patient boarding in the emergency department as a symptom of complexity-induced risks
The essential problem is that there comes a time when a patient is essentially done in the emergency department but needs further care. Sometimes their condition is severe, so they'll go to the ICU, or perhaps it's a longer-term problem and they'll need a specialized care facility.
On the surface this sounds easy enough: when people are done, they go where they need to go. Unfortunately, it doesn't end up working that way, at least not in any hospital I've been in, and certainly not in this study. What ends up happening is that patients often have to wait for a bed while some sort of negotiation happens between the various departments.
The authors give an example of one way of negotiating and coping with this: the ICU asks the emergency department to put in a central line before they'll accept a patient. This is typically something I would expect the ICU to do.
So this shifts workload from the ICU to the emergency department, in order to protect what Woods calls capacity for maneuver, or CfM. CfM is a measure of how much adaptability or room there is to respond to a new challenge.
This isn't an exact measurement. In this case, we're not saying that a unit has 10 CfM or 1 CfM, but thinking in these terms, we can say that the ICU is defending its capacity for maneuver by shifting work onto the emergency department.
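To make the idea of CfM as a rough reserve more concrete, here's a toy sketch in Python. All of the names and numbers here are my own invention for illustration, not anything from the chapter, and CfM isn't actually a number you can compute like this; it's just a way of thinking.

```python
# Toy sketch: CfM as a rough "reserve" that a unit defends by shifting work.
# All names and numbers are invented for illustration, not from the chapter.

class Unit:
    def __init__(self, name, capacity, load):
        self.name = name
        self.capacity = capacity  # total ability to absorb demand
        self.load = load          # demand currently being handled

    @property
    def cfm(self):
        # Remaining room to adapt. Not a real measurement, just a mental model.
        return self.capacity - self.load

    def shift_work(self, other, amount):
        # Defend our own CfM by pushing work onto another unit.
        self.load -= amount
        other.load += amount

icu = Unit("ICU", capacity=10, load=9)  # almost no room left to maneuver
ed = Unit("ED", capacity=10, load=6)

icu.shift_work(ed, 2)  # e.g. "put in the central line before we accept"
print(icu.cfm, ed.cfm)  # the ICU's reserve grows, the ED's shrinks
```

The point of the sketch is just that the shift doesn't create any capacity; it only moves the shortfall from one unit to another.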
“the emergency medicine system absorb much of the burden of the chronic problems facing healthcare in the United States today.”
This quote really rang true for me from my experience in emergency rooms. But it also means this is a good place to look at and learn from.
In terms of CfM, lower CfM indicates that the system as a whole is at risk of breakdown, and things like shifting work are forms of adaptation that try to avoid that.
There are different forms of adaptive system breakdown. The first is decompensation, a failure mode where the system, or part of it, exhausts its capacity for maneuver. It is no longer able to adapt and respond to new input or challenges. Decompensation can happen very suddenly, or it can happen gradually.
As the authors note, units and systems tend to be at least implicitly aware that there's a risk of decompensation. You can see this in this hospital: people are adapting, at least within their own area, to preserve CfM.
The second form of adaptive system breakdown is working at cross purposes.
This form of breakdown occurs when different parts of the system each work to achieve their own local goals, making adaptations to do so. But in this case, those adaptations are actually making it more difficult for other parts of the system to do what they need to.
Each group is taking action to reduce its own risk of decompensation. But while doing that, they fall into this pattern of working at cross purposes, where they are not working toward a goal they all share, or a longer-term goal like the benefit of the patient.
Essentially this failure mode is: “behavior that is locally adaptive but globally maladaptive.”
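Here's a toy sketch of that "locally adaptive but globally maladaptive" dynamic, continuing the invented numbers from before (none of this is from the chapter). The assumption I'm adding is that every handoff creates some coordination overhead, like the negotiation and re-assessment the study describes:

```python
# Toy sketch of "locally adaptive but globally maladaptive" (invented numbers).
# Shifting work helps the unit doing the shifting, but each handoff is assumed
# to add coordination overhead, so the system as a whole ends up worse off.

HANDOFF_OVERHEAD = 0.5  # assumed extra work created by every shift

def shift(loads, src, dst, amount):
    loads = dict(loads)  # copy, so we can compare before and after
    loads[src] -= amount
    loads[dst] += amount + HANDOFF_OVERHEAD  # negotiation, re-assessment, etc.
    return loads

before = {"ICU": 9.0, "ED": 6.0}
after = shift(before, "ICU", "ED", 2)

print(after["ICU"])         # locally better for the ICU
print(sum(after.values()))  # globally worse than before the shift
```

Each shift looks like a win from inside the unit that makes it; you only see the cost when you sum across the whole system, which is exactly the zoom-out the chapter is asking for.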
The authors categorize the observations from the hospital into three groups, based on the type of adaptation:
- Responses that defended their CfM from being reduced by others.
- Responses that internally readjusted their CfM by making some sort of local change, like changing roles, teams, or activities.
- Responses that were made to improve coordination across teams.
Even though these adaptations, things like shifting workloads, hiding resources, or reconfiguring teams, allow each team to continue to adapt and respond, when you zoom out they don't do much to make the system as a whole better. In some cases they may even be counterproductive.
One way to think about this is with a model of the system as having multiple layers and perspectives (as Woods would later elaborate on). Continually shifting perspective can allow you to see this sort of breakdown, instead of being stuck in a single view.
There's been some research to identify principles that can be used for coordinating across these sorts of groups. Usually such a system is called an interdependent or polycentric network: a network of communities formed around a common goal.
This research looks at layered network design and polycentric governance (more on those in future issues). But even just knowing that these exist, the authors say the tactics this hospital displays, even where it's trying to improve coordination, fail to act on some of the latest thinking.
This is especially noticeable in a common pattern that occurs when organizations are looking down the barrel of adaptive system breakdown from working at cross purposes: often they'll resort to a more centralized command and control structure. But just as we see in the hospital study, and as other research supports, this typically is not very effective.
Nor is adding some other group to facilitate communication between the original two groups. This hospital had tried that, with a patient transfer group meant to facilitate coordination between the ED and the ICU. What they saw was that coordinating through these people was slow.
Even with this coordination group formed, physicians from both sides would actually go visit the physical location of the other department (ED docs going to the ICU and vice versa) to independently assess their capacity.
This ineffectiveness of adding a new group seems to come from the fact that it creates new burdens even as it reduces others. So instead of helping overall, it really just shifts where the trouble in coordination shows up.
I think this is really important for us in software and technology. I've seen it a number of times: instead of teams being created to solve a larger problem, they're created to facilitate communication in response to a more local dysfunction.
The authors sort of flatly (and hilariously) say:
The design principles for such a unit have yet to be determined.
Which I think is a great summary and understatement of the problem.
We can't effectively create teams whose sole purpose is to smooth out the interconnections between other teams. That's not to say we can't have teams that offer tools, techniques, and support, but we need to be wary of the model of "we'll just add an intermediary and that'll fix things."
The authors note that we can use technology to help mediate these interactions, but at least in the hospital case, "the computer tools for building such common ground are quite weak," and I think in a lot of our organizations this can be the case as well.
Reciprocity is something that seems to help these layered networks greatly. It was absent in the hospital case, as evidenced by the lack of trust in the system, to the degree that doctors were checking on each other's departments.
Reciprocity is where parts of the system, whether teams, groups, or units, essentially trust each other. As a result, one unit does something that gives up some immediate benefit so that in the long run things are better for both. At the same time, the unit that gave something up expects and trusts that the other will be willing to do something similar in the future, that is, to give something up in service of the larger goals.
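That give-a-little-now, gain-together-later structure can be sketched as a tiny repeated-interaction model. The payoff numbers below are entirely invented to illustrate the shape of the idea, not anything measured in the study:

```python
# Toy sketch of reciprocity between two units (invented payoffs, not from the
# chapter). Each round, a unit can accept a handoff early, giving up a little
# now, trusting the other to do the same later.

def payoff(a_gives, b_gives):
    # Invented values: giving costs a little now but helps both over time.
    if a_gives and b_gives:
        return 3, 3  # both give a little, both gain
    if a_gives and not b_gives:
        return 0, 4  # A gave something up and was never paid back
    if not a_gives and b_gives:
        return 4, 0
    return 1, 1      # neither trusts the other; everyone limps along

ROUNDS = 5
with_trust = sum(payoff(True, True)[0] for _ in range(ROUNDS))
without_trust = sum(payoff(False, False)[0] for _ in range(ROUNDS))

print(with_trust, without_trust)  # reciprocity beats mutual defensiveness
```

The catch, visible in the middle cases, is that giving only pays off if the other side actually reciprocates, which is why the trust that was missing in this hospital matters so much.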
The authors warn us, though, that modeling these patients being stuck in the ED as a symptom of a complex system doesn't really matter unless that model helps generate new ways to reduce risk for patients. If we're just modeling for modeling's sake, there's really no point.
- Adaptations that work on one scale can harm others
- Capacity for maneuver (CfM) is a way of measuring and thinking about the reserve a part of a system may have to respond to a challenge
- There are multiple forms of adaptive system breakdown including: decompensation (where there is no more CfM) and working at cross purposes (local goals hurt global ones)
- Reciprocity is very important in effective, multi-group work
- Teams created solely to mediate friction between two teams are more likely just moving that trouble around.
- Software systems can help build common ground in these interactions, but the tooling must be strong
**Who are you?** I'm Thai Wood and I help teams build better software and systems
Want to work together? You can learn more about working with me here: https://ThaiWood.IO/consulting
Can you help me with this [question, paper, code, architecture, system]? I don’t have all of the answers, but I have some! Hit reply and I’ll get right back to you as soon as I can.
**Did someone awesome share this with you?** That's great! You can sign up yourself here: https://ResilienceRoundup.com
Want to send me a note? My postal address is 304 S. Jones Blvd #2292, Las Vegas, NV 89107