Resilience Roundup - Uncovering the Requirements of Cognitive Work - Issue #30

This week is a shorter overview of the various ways that we can begin to understand cognitive work, by Emilie M. Roth. It’s from Human Factors: The Journal of the Human Factors and Ergonomics Society. Thanks to Richard Cook for bringing this article to my attention.


Uncovering the Requirements of Cognitive Work

This is a good overview of a few different fields that each came up with their own methods of trying to look into how people, especially experts, perform cognitive tasks in their fields.

“One of the hallmarks of the human factors discipline is the ability to analyze and model the requirements of work to support and design more effective systems”.

Who wouldn’t want that? I know I definitely consider these things as I’ve learned more and more about it; as I look at monitoring, as I write tooling, as I develop incident response and review processes.

The author notes that cognitive analysis methodology really has two parts: cognitive task analysis (CTA) and cognitive work analysis (CWA), but that they’re mostly referred to collectively.

More traditional task analysis methods have existed since at least the fifties, but Roth emphasizes that these newer methods are different in a few ways. They aim to:

  1. Uncover the knowledge and thought processes that underlie observed task performance
  2. Anticipate contributors to performance problems
  3. Specify ways to improve individual and team performance (e.g. new forms of training, user interfaces, or decision aids)

The seeds

These were developed from a few different fields:

The first being cognitive science research. This is where researchers drew on cognitive psychology to try to understand what strategies experts were using when doing complex tasks. They asked questions like: what was happening in the minds of medical diagnosticians? What was happening when people were troubleshooting electronics? A lot of this area was motivated by a desire to build computer systems that imitated human experts and would then, hopefully, also be able to tutor people the way expert human tutors might. This is where techniques like think-aloud protocols and other methods of eliciting expert knowledge started to come from.

They also developed some ways to model what was going on in the minds of these people, with the idea that if there’s a computational model of this sort of thing, then you can use it to evaluate an interface design, or use it as a basis from which to make good design decisions.

Next came cognitive systems engineering research. And if you’ve been reading here for long, this is where most of the names that will sound the most familiar to you come from. This is where Jens Rasmussen, Erik Hollnagel, Morten Lind, Dave Woods, and others did research in response to events like Three Mile Island, where multiple unanticipated events combined, causing traditional training and methods to fail. The new framework they invented as a result is what they called cognitive systems engineering.

One of the big takeaways there is that there were functional analysis methods that could be used to look at a domain and try to define its goals and constraints, and draw out what the problem space was from a cognitive perspective — the problem space that practitioners actually face. These methods were based on systems engineering and ecological psychology. They focus a lot on the importance of looking at each domain uniquely: what are some inherent constraints, for example, that might not be present elsewhere?

And then, taking the results of these analyses, practitioners in that field could try to reason about the different system goals or constraints. It would let them zoom out a bit from purely tactical work and do better when unanticipated situations arose.

Next, there was naturalistic decision making research. I’d heard the term “naturalistic decision making” thrown around a bit. At first it seemed kind of strange to me — what does nature have to do with it? But really, naturalistic decision making is just saying: what if we looked at the way experts made decisions in real-world settings? Not in theory, not in a lab, but what if we just went out and observed them in the field? Whether that’s firefighters, emergency room nurses, what have you.

And it turns out that when they did this, the things they found were pretty different from what was being prescribed at the time by the various decision making models and decision theory. Gary Klein (tk link) is a good example of someone from this field. And at this stage, it became pretty obvious that studying decision making in the real world was very important.

Finally, distributed cognition research played a role here. This approach looked at the value of analyzing cognitive work in the settings where it actually takes place, making sure to consider all the other parts. It focused on how cognition doesn’t just occur in an isolated environment — even if you’re just talking about one person in an office, for example.

“cognitive work of people cannot be understood without reference to other people and external artifacts and the environment that served to increase or offload cognitive demands.”

They give the example of machines or even other people serving as an external memory.

I love it when a plan comes together

Fortunately, each of these disciplines took a bit from the others and started to come together. Roth says that there is now a common body of theory and research, and a growing consensus that in order to do a “full” cognitive analysis you _must_ have an understanding of the specific domain. You can’t just come in as an outsider with no domain experience and effectively help practitioners.

Useful methods

So then what are some methods that we can use to help uncover the knowledge that expert practitioners have or the strategies they’re using?

Techniques have been developed like structured interviews, applied cognitive task analysis, goal-directed task analysis, and critical incident analysis techniques. Critical incident analysis is where Sidney Dekker, for example, comes in.

Roth notes that one of the best documented, most widely used cognitive task analysis methods is the critical decision method (CDM) developed by Gary Klein and his associates. This is where they ask a structured series of probes designed to get people to provide retrospective accounts of decisions made during actual incidents.

Then they do this over and over, “progressively deeper,” to help the people in the domain actually talk about the challenges they experienced and the strategies they used. This is because, as we have all likely experienced, experts are not very good at pinpointing what it is that makes them experts. They just operate from that frame.

Klein’s method includes some “what if” questions, like “What might a less experienced individual have done?” These help expand on what the experts are describing in terms of cognitive demands. The approach has been used in a bunch of different applications, from NICU nurses to naval pilots.

Roth says there have been a lot of major advances in uncovering domain constraints and affordances.

This includes: “Identifying the goals, means, and constraints in a domain that define the boundaries within which people operate”.

The point of this is to end up with some sort of representation that shows the results of the analysis — some sort of abstraction. It then gives you a way to derive what the people or machines are going to be doing, the cognitive activities that would take place, and the challenges that result.

Now, most importantly of course, is actually using the cognitive analysis to inform design. I mean, if we just do this research and we don’t actually change anything or take it into account, it really doesn’t do us much good.

The author cites the CWA framework developed by Jens Rasmussen, which takes work domain analysis and builds on it. This whole area includes things like empirical looks at cognitive performance: identifying what strategies experts are using, how they’re working around things, and what strategies are already effective — especially those that depend on something in the environment that they would want to preserve, or make sure is still there, as new technology becomes available. This is what goes wrong when automation doesn’t give feedback: if a previous system gave feedback that a lot of effective strategies relied on, and the new system doesn’t have it, that would be bad, even though the new tech may be “better.”

The author gives us a few examples of where design has been informed by cognitive analysis: redesigning airborne warning and control system (AWACS) weapons director stations, navy ships, power plants, anesthesia monitoring, things like that. But also, Roth says, “cognitive analysis have also been used to support development of training, performance evaluation, analysis of contributors to human error and capture of corporate knowledge.”

Finally, the author concludes, as you might expect, that depending on your domain, you might choose different CTA methods. It really depends on what your objectives and constraints are. Roth suggests that if the goal is to find leverage points where you might introduce new technology and have a positive impact, then you’ll want techniques that give a broad overview of cognitive and collaborative requirements and challenges — perhaps field observation, field interviews, or structured interviews.

On the other hand, if the goal is to build systems that support that work, then looking at the work domain using work domain analysis methods that represent the goals and constraints is going to be useful.

Alternatively, if the goal is to build training or some sort of assessment tools to rate people’s proficiency, then you’ll probably want to use methods that can capture, in detail, the knowledge and skills that distinguish practitioners at different levels. This comes back to think-aloud protocols, for example.

“Whenever possible, it is best to use multiple, converging cognitive analysis methods”

“The specific selection of methods will depend on the goals and pragmatic constraints of the project”

Roth closes by saying that, in the end, a collaborative effort needs to be made between analysts and domain practitioners. These analytic methods give domain practitioners a different way to show and talk about what it is they know, but they also allow analysts to understand them.

Takeaways

  • Multiple disciplines have converged to offer a variety of tools to help us understand how experts perform
  • Which tools to use is highly dependent on what your goals are and on the particular domain
  • Much of the research supports that learning from real people in real settings, or at least in higher-fidelity simulations, is ideal
  • The best cognitive analyses result from cooperation between analysts and domain practitioners
