A Shared Pilot-Autopilot Control Architecture for Resilient Flight

If we’ve spoken elsewhere or you follow me on Twitter, then you already know that I believe Black lives matter, but in case you don’t follow me and need to hear it, here it is: Black lives matter. You don’t need to hear from me on these issues, though. Personally, I’ve been working with my local community to understand where I can help: asking, listening, and following through. If you don’t know where to start, here are some resources that I’ve had recommended to me: https://blacklivesmatters.carrd.co/


When I discuss resilience with readers, especially new ones, I often get the question “so just how can we engineer resilience?” As Dr. Richard Cook has pointed out in his excellent talk, A Few Observations on the Marvelous Resilience of Bone, there are two different types of resilience engineering. In the first, we find sources of resilience that were there all along and then guide or amplify them. This is the most common form of resilience engineering that the research tells us about. More recently, a second sort has started to become available to us, one in which we deliberately alter the resilience of a system, which requires a very deep understanding of it.

This paper by Amir Farjadian, Benjamin Thomsen, Anuradha Annaswamy, and David Woods is an example closer to the latter. It can be helpful if you or your organization finds itself asking a question like “What might it mean to take some of the ideas of resilience and apply them in a very concrete way?” Because this is a very specific use case, and still somewhat theoretical, there is a lot of math in it. Don’t worry if you don’t understand it; I don’t believe it’s really needed in order to get the benefit from the paper. The real benefit, I think, is in showing how certain ideas can be applied in a less abstract way.

The authors look at engineering a plane’s actuators and autopilot so that the system is more resilient, as opposed to more traditional designs that tend to be merely robust. (For more on the differences, see the Four Concepts for Resilience.) This paper draws on Woods’ previous work, some of which we’ve discussed here, but instead of graceful extensibility, this time it looks at graceful degradation, while still looking at capacity for maneuver.

Capacity for maneuver (CfM) is a measure of how much adaptability, or room to respond to a new challenge, a given part of the system has, whether that part is a person or an autonomous agent. Since it is a measure of capacity, it can be depleted or restored, and it can be increased depending on how other parts of the system interact. Generally, the greater the capacity for maneuver a part of the system has, the more resilient it is likely to be, because it can adapt more to unanticipated disturbances in the system. On the other hand, something with very low capacity for maneuver would be quite brittle, since any unexpected disturbance would exhaust its capacity to adapt.
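To make the idea a bit more tangible, here is a minimal Python sketch of one way to proxy CfM in a flight-control setting: how far each actuator sits from its saturation limits. The actuator names, limits, and normalization below are my own illustrative assumptions, not the paper’s formulation.

    # Minimal sketch: capacity for maneuver (CfM) as remaining actuator margin.
    # Actuator names, limits, and the normalization are illustrative assumptions,
    # not the paper's exact formulation.

    def actuator_margin(command: float, lower: float, upper: float) -> float:
        """Distance to the nearest saturation limit, normalized to [0, 1]
        (1.0 = centered, 0.0 = saturated)."""
        span = upper - lower
        distance_to_limit = min(upper - command, command - lower)
        return max(0.0, min(1.0, 2 * distance_to_limit / span))

    def capacity_for_maneuver(commands: dict[str, float],
                              limits: dict[str, tuple[float, float]]) -> float:
        """Take the *worst* actuator margin as the system's CfM: one saturated
        surface is enough to make the whole aircraft brittle."""
        return min(actuator_margin(commands[name], *limits[name]) for name in commands)

    # Hypothetical example: an elevator commanded near its limit depletes CfM.
    limits = {"elevator": (-25.0, 25.0), "aileron": (-20.0, 20.0)}
    commands = {"elevator": 22.0, "aileron": 3.0}
    print(capacity_for_maneuver(commands, limits))  # low value -> little room left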

You don’t need to be able to represent your system in a complex formula like the one in the paper, but I think this paper raises great questions for other organizations. How might you raise your capacity for maneuver? You can ask yourself these questions and move toward a more concrete understanding of how principles of resilience could apply.

Some questions to ask could include:

  • How would you even measure capacity for maneuver?
  • What would that look like?

It may not be the same answer for every person or in every view. As Woods discusses in his previous work and demonstrates here, it’s important that agents be able to assess their current capacity. If they can’t, then when their capacity is low they won’t know to respond and seek out more. Conversely, if they can’t measure it and they have very high capacity for maneuver, they’ll be unable to put that capacity to use helping other parts of the system.

It’s also important to note that though this team looked at actuators and autopilots, it didn’t remove the human from the loop. Pilots were expected to input a measurement of how bad the disturbance was, which was used to help calibrate later interventions. There was also a trade-off between how well the controls respond to that input and the CfM. How much the automated side of the system “trusted” the human agent depended on the number of flight hours logged, and the time taken to respond and give the input was also considered.
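As a rough illustration of that trade-off, here is a small sketch of how an automated controller might weight a pilot’s disturbance report by a confidence score built from logged flight hours and response time. The weighting function, thresholds, and constants are my own assumptions for illustration; the paper develops this with adaptive-control machinery rather than anything this simple.

    # Sketch: blending a pilot's disturbance-severity report with the autopilot's
    # own estimate, weighted by a confidence score. The specific weighting and
    # constants here are illustrative assumptions, not the paper's equations.

    def pilot_confidence(flight_hours: float, response_time_s: float) -> float:
        """More logged hours and a faster report -> more weight on the pilot's input."""
        experience = min(1.0, flight_hours / 1000.0)   # saturates after ~1000 h (assumed)
        timeliness = 1.0 / (1.0 + response_time_s)     # decays as the response gets slower
        return experience * timeliness

    def blended_severity(pilot_estimate: float,
                         autopilot_estimate: float,
                         flight_hours: float,
                         response_time_s: float) -> float:
        """Convex combination of the two severity estimates (both in [0, 1])."""
        w = pilot_confidence(flight_hours, response_time_s)
        return w * pilot_estimate + (1.0 - w) * autopilot_estimate

    # A seasoned pilot who answers quickly pulls the blended estimate toward their
    # own reading; a novice or a slow response leaves it near the autopilot's.
    print(blended_severity(0.8, 0.3, flight_hours=5000, response_time_s=1.0))   # ~0.55
    print(blended_severity(0.8, 0.3, flight_hours=100, response_time_s=10.0))   # ~0.30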

Ultimately, it was shown that this resilient setup was able to outperform more traditional setups in standard models, taking into account both more and less skilled pilots.

Elsewhere, when Woods discusses the more theoretical side of capacity for maneuver, he mentions that there are multiple views, or echelons, from which you can examine a system. Ultimately you need to look across all these different views in order to get an idea of how the system is functioning as a whole. As you look across those layers, the things that constitute an agent’s capacity for maneuver may change. Here we see that capacity for maneuver is very well defined in this one view of the system; it might look different for different parts of the system and for different agents.

For a tech company it may be the amount of downtime that your incident response teams have between shifts. Perhaps it is the breadth or depth of authority that your responders have. Or, for an organization or pillar that does incident response itself, maybe it’s the number of responders, where that pillar can collectively be seen as an agent.
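To make that concrete, here is a minimal sketch of what tracking such a proxy might look like for an incident response team. Every field and weight is an invented example, not a metric prescribed by the paper.

    # Sketch: a crude CfM proxy for an incident response team. The fields and
    # weights are invented examples, not a prescribed metric.
    from dataclasses import dataclass

    @dataclass
    class ResponseTeam:
        available_responders: int      # people not currently engaged or off-shift
        total_responders: int
        hours_since_last_incident: float
        can_provision_resources: bool  # authority to act without escalating

    def team_cfm(team: ResponseTeam) -> float:
        """Blend staffing headroom, recovery time, and authority into a 0-1 score."""
        staffing = team.available_responders / max(1, team.total_responders)
        rest = min(1.0, team.hours_since_last_incident / 24.0)  # a day to recover (assumed)
        authority = 1.0 if team.can_provision_resources else 0.5
        return staffing * rest * authority

    print(team_cfm(ResponseTeam(2, 6, 4.0, False)))   # depleted: ~0.03
    print(team_cfm(ResponseTeam(5, 6, 30.0, True)))   # plenty of room: ~0.83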

The advantage of thinking in these terms about your organization’s capacities is that even if you don’t arrive at a formula that can be boiled down to an exact number, it can clarify what the organization can do, or is willing to do, to increase those capacities. You can both plan for the future and investigate how the organization has historically responded when these capacities were low, and perhaps even look at why it responds that way.

Of course, one person is going to have a different view of this than another. That’s perfectly fine, and in fact I’d say it’s encouraged. Again, Woods speaks to us about those different echelons. They need not map directly to the tiers of a hierarchical organization, but across those levels we would certainly expect different views and different abilities to marshal further capacity. Looking at the organization this way can also alert you to mismatches. Perhaps if a responder had some of the authority to marshal resources that currently only a higher-level executive has, there might be a drastic increase in CfM.

Takeaways

  • Investigating what measures reflect capacity for maneuver can help teams and organizations assess where to invest in resilience.
  • There are two types of resilience engineering: one in which we take advantage of existing sources of resilience, which is the most common and most doable; and a second, increasingly available, in which we examine the system and deliberately increase its resilience.
  • Capacity for maneuver doesn’t necessarily need to be broken down into a complex mathematical form to be useful, but seeing how it can be is itself useful.
  • Principles of resilience can be used as guides when examining your systems, teams, and organizations, but they can also be applied in a very specific way like this.
  • What constitutes capacity for maneuver may differ depending on what part of the system you’re looking at, but it still exists.