Thanks for everyone’s patience as I went through one of the densest papers I’ve read in quite some time while navigating some health care things. I’ll be spending part of the holiday season updating the podcast and preparing for the new year. I’ll also be hosting more coaching sessions in the new year so keep an eye out.
How Not to Have to Navigate Through Too Many Displays
This is a chapter by Woods and Watts from the Handbook of Human-Computer Interaction. It covers the issues that arise when using computer displays to try to figure out what's going on in a system.
The chapter uses several case studies from the usual domains you’ve probably come to expect: medicine, nuclear power, and space.
Most of the issues with navigating through displays, whether it's for patients, doctors, or for us in our dashboards, come from the gap between the view of the operator and the view of the designer. Though the authors don't say it explicitly, I believe this is still the case even if both of those roles are the same person!
I think that even when we design for ourselves or our team or people with similar jobs as us we can fall into the trap of overlooking the different pace at which incidents can unfold along with the extra cognitive work that dealing with that can entail.
The authors borrow principles from cinematography, like "visual momentum," the way someone can follow cuts in a movie, to show how the loss of visual momentum can disorient users navigating displays as well.
The focus here is specifically on tasks in context. This is important, because solutions that only create more efficient ways of navigating don’t address the additional cognitive work that those solutions can impose.
The authors give us some case studies that make it easier to see the various issues that can arise and to understand the different functions an effective display must fulfill. Once we understand those functions, we can help create better displays and evaluate our current ones.
This might seem like a lot just to talk about displays, but as we work with our systems, whether during normal periods or in incidents, the displays we create and have available to us are the primary way we can understand them.
The Keyhole Property
The keyhole property is inherent to computer displays (as opposed to, say, dedicated gauges). It's like looking through a keyhole because your view (the monitor) can only represent a small portion of all the possible data and views.
We look through the keyhole and only see a small part of the room behind it. This not only prevents you from seeing all the data at once, it can also interfere with seeing the links between the parts.
This is part of the flexibility of having a computer interface, but that doesn’t mean we can’t mitigate some of the downsides or even take advantage of that control to help users.
Workspace coordination is the analysis and design of displays and interactions. A big part of workspace coordination is making it so the user knows where to look next while they're trying to accomplish a given task.
Without workspace coordination the user faces dangers such as:
- Getting lost in all the possible views
- Failing to find data when it's needed, especially as events unfold
- Becoming focused on managing the interface, instead of using it to get something done
we discuss ways in which designers can coordinate different kinds of display frames within a virtual workspace
This is essentially what we want to do. We want to be able to provide a coordinated workspace for our users (who, again, may be ourselves!) so that we can avoid getting lost and are able to find information we need as events unfold.
Visual momentum and flexibility
The "regular" way that designers tend to shift our gaze across displays is by replacing one view with the next. The authors point out this is like "a poor cut in film": it's very sudden and can be jarring, and there's no visual momentum, since it's a full replacement.
We're left with no clues about what other options are now available to us as a result. We can't tell what else is "nearby." This default mode creates fragmentation: we're looking at this one piece or some other piece, but never a cohesive larger view, only one bit at a time.
As a result, this changes the task we may have been trying to accomplish into something else entirely. We now need to:
- Search through the information, probably one screen at a time
- Remember, each time we need the information, where all the pieces are
- Integrate the data together, since we probably saw it one screen at a time
It can be tempting to say that increasing the flexibility of the UI would solve this, but that can work against the user. It shifts attention away from whatever we were trying to do to how to set up the UI.
This sounds easy, and it can be when things are slower and not much else is going on, but during periods when a lot is happening and things are changing quickly, this extra cognitive work can be ill-afforded.
Next, we’ll take a quick look at each of the case studies. We won’t be going in-depth on any of them, but they’ll allow us to examine some important principles.
An infusion pump
This was a medical infusion device intended to help mothers with high-risk pregnancies control pre-term labor and stay at home instead of in the hospital.
It contained 40 different displays, 7 at the top level, with anywhere from 1 to 7 below that.
The infusion pump will revert to the default display when bad instructions are entered, but it doesn’t inform the user how they got back to that default screen or what was bad about the instructions.
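To make the contrast concrete, here's a minimal Python sketch of an input handler that, unlike the pump's silent revert, keeps the user on the same screen and says what was wrong. The function name, limits, and display names are invented for illustration, not taken from the actual device.

```python
# Hypothetical sketch: instead of silently reverting to a default
# display, the handler reports what was rejected and why, and keeps
# the user on the screen they were already using.

def handle_rate_entry(raw_value, min_rate=0.1, max_rate=99.9):
    """Validate an infusion rate entry and explain any rejection."""
    try:
        rate = float(raw_value)
    except ValueError:
        return {"accepted": False,
                "reason": f"'{raw_value}' is not a number",
                "next_display": "rate_entry"}  # stay put, don't reset
    if not (min_rate <= rate <= max_rate):
        return {"accepted": False,
                "reason": f"{rate} is outside {min_rate}-{max_rate} ml/hr",
                "next_display": "rate_entry"}
    return {"accepted": True, "rate": rate, "next_display": "confirm"}
```

The key design choice is that a rejected entry never changes which display the user sees; the feedback arrives in place.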
Monitoring space systems
In this system, the "raw data is the basic unit of display." It shows the status of the system as different values. It's made up of a single physical display that has tiled windows. In addition, each of the values is color coded based on its status: white for normal, red when the component is being tested, purple if something abnormal was found in a test.
This probably all sounds familiar. Likely we've all worked with systems a lot like this, if not built a few ourselves. The trouble with systems like these is that they don't tell you anything about why things are the way they are. When you see something that isn't normal, you have to decide where to look next and whether it's even worth investigating. Since the system isn't context aware, whatever you were working on might be more urgent.
Next, you have to think through what other data would help you analyze the component problem, then think of how you might get that data. Then, as you search for it, even more tasks might be created as a result. For example, you might have to stop and declutter the UI.
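One way to reduce that burden is for the display to carry the "why" alongside the color code. This is a hypothetical sketch, not the actual space system: the colors match the scheme described above, but the fields, component names, and related views are my own invention.

```python
# Illustrative sketch: a status entry that carries not just a color
# code but why it changed and where to look next.

STATUS_COLORS = {"normal": "white", "testing": "red", "anomaly": "purple"}

def annotate(component, status, cause=None, related=()):
    """Build a status record that answers 'why' and 'where next'."""
    return {
        "component": component,
        "status": status,
        "color": STATUS_COLORS[status],
        "cause": cause,              # why the component is in this state
        "look_next": list(related),  # views likely to help analysis
    }

entry = annotate("fuel_cell_2", "anomaly",
                 cause="self-test voltage out of range",
                 related=["power_bus_overview", "fuel_cell_2_history"])
```

With `cause` and `look_next` attached, the display can suggest where to look instead of leaving that entire decision to the operator.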
Patient monitoring in an operating room
An operating room was moving from a system with many displays, each dedicated to a specific sensor, to a single, very customizable display. Of course, as we know, being customizable means it needs customizing and configuration.
When researchers looked into it, they discovered that very few of the features were ever used. In large part this was because the surgeons would configure the display in a static way, with the most important information always in the same place.
This worked most of the time, but there were critical moments where it broke down, because some things were fine to be viewed one by one, serially, whereas for other views it was crucial that they be available in parallel.
Making more effective displays
Now that we've had a chance to see how the problems of the keyhole property manifest and how they can affect users, we'll examine some ways of fixing them.
As a reminder, how you implement each of these strategies in your own system will depend on what tooling you use and where you are today, but for the most part all of these strategies will help you evaluate and improve your displays.
Since we said visual momentum is how a user can follow along as views or data change, we can start by looking at how to increase visual momentum, drawing on the case studies to show how those improvements might be made.
Increasing visual momentum
There are several ways to increase visual momentum in a system; the authors give us a few:
- Longshots
- Spatial dedication
- Side effect views
- Parallel vs serial views
I’ll break down each of these next.
A longshot is a place where the viewer can see the big picture. The authors tell us that a lot of the research points to these sort of overview displays as supporting navigation through available views, but as we’ve seen from the examples not all overviews are effective.
Effective overview displays provide:
- A status summary
- A way of orienting based on the semantics of the domain
Longshots as a status summary view
These sorts of views are probably very familiar. In my experience these are often the first views that system designers will attempt to create for themselves or others.
The idea is that it allows users to take a step back from the details and assess the broader situation, often trying to understand the state of the system in question.
A good longshot or overview has context. It has relevant status information and can help you decide where to look next. It allows you to see the big picture without taking away your ability to focus on the details.
That's what made the color coded overview from the space system so confusing. Sure, you had an overview, but it obscured the details needed while doing the work.
Effective overviews or “summary functions” must:
- Provide distilled information.
One way to check whether the view and information you're showing are concise is to shrink the view down somewhat and see if it's still informative. Over-summarizing can result in there being too little information available to be useful.
- Provide abstracted information
The view should give higher level information, not raw data. It should allow “integration of detail that informs the observer about higher level questions” and show data from across the system, not just one area.
The abstracted information answers questions like:
- What mode is the system in?
- What activities are happening?
- Are subsystems normal or suffering a disturbance?
- Include information about change and sequence
This is in order to support pattern recognition. It can include information about what happened recently as well as developing trends.
- Support “check reading”
You should be able to read the screen at a glance and get an idea of what is changing and what is interesting in a way that doesn’t take a lot of work.
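As a rough illustration of these requirements, here's a small Python sketch of a summary function that abstracts raw subsystem readings into some of the higher-level answers listed above (mode, disturbances) and includes change information by comparing against a previous summary. The data shapes and subsystem names are assumptions made for the example.

```python
# Sketch of a "summary function": distill raw readings into
# higher-level status, plus what changed since the last summary.

def summarize(mode, readings, previous=None):
    """readings: {subsystem_name: {"ok": bool, ...}}"""
    disturbed = sorted(name for name, r in readings.items() if not r["ok"])
    summary = {
        "mode": mode,                    # what mode is the system in?
        "disturbed": disturbed,          # which subsystems have issues?
        "all_normal": not disturbed,     # supports check reading at a glance
    }
    if previous is not None:
        # change/sequence information: what became disturbed since last time
        summary["newly_disturbed"] = sorted(
            set(disturbed) - set(previous["disturbed"]))
    return summary
```

The point is that the overview answers questions directly rather than re-presenting raw values and leaving the integration work to the reader.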
features that are visible at a distance that provide information about location and orientation
Just like in the real world, landmarks help you orient yourself and understand where you are. If you become disoriented in the “space” of the data with no landmarks, perhaps by accidentally choosing the wrong view, it can be very difficult to get back on track.
Landmarks can have content themselves, which allows them to say something about the content around them, for example a summary status, but landmarks that provide structure with no content are useful as well.
Spatial dedication is a way of dealing with interfaces where you keep each part in the same place as much as possible. You've probably done this, especially if you have multiple monitors: the upper right might be for one thing, say Slack, and the center might be for your browser. This way you always know where to look to find or do certain things.
In both cases, with surgeons and mission control, the users spent time during less busy periods to set up their views to be fixed in place as much as possible so that when the busy periods would happen they could be ready.
This sounds like a good thing, and it certainly is compared to the alternative of slowing down critical work in a high pressure moment, but it also tells us that the cost of navigating around the system is so high that users need to invest effort in advance to offset it. Further, this can create a vulnerability: if some situation arises where your pre-set view doesn't give you what you need, you risk being sidetracked by messing with the interface itself.
These cases illustrate the repeated observation that users seem to prefer a spatially dedicated representation even if it is crude and deficient in other ways over keyholed computer systems despite apparent flexibility to call up many different features, displays, and options.
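Spatial dedication can be approximated in software with a fixed layout table: every view gets one permanent slot, so there's never a question of where to look. This is only an illustrative sketch; the grid positions and view names are made up.

```python
# Sketch of spatial dedication: each view has a single, permanent slot.

FIXED_LAYOUT = {
    "vitals":       {"row": 0, "col": 0, "always_visible": True},
    "alarms":       {"row": 0, "col": 1, "always_visible": True},
    "procedure":    {"row": 1, "col": 0, "always_visible": False},
    "calculations": {"row": 1, "col": 1, "always_visible": False},
}

def slot_for(view):
    # Raises KeyError rather than inventing a position: a view without
    # a dedicated place would undermine the whole scheme.
    return FIXED_LAYOUT[view]
```

Because the table is the single source of truth for placement, nothing can reposition a view at runtime, which is exactly the property the surgeons and mission controllers were recreating by hand.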
Center-Surround and side effect views
Another approach can be to have a high-resolution (as far as information is concerned) center, with "lower-resolution" or summary type views surrounding it. This mirrors how human vision and perception tend to work. It's also called the "focus plus context" approach.
If you’ve ever used a paper atlas, you might recognize this approach. The area of the world you’re focused on is in high detail, but the edges have lower detail and pointers to where you can see the more detailed views.
The more you know about the work being done, the more powerful this approach is. If you're able to represent related data in the surround, it'll provide much better cognitive support for the user.
Of course, without some knowledge about the domain this is impossible. Fortunately, we typically don't face the same problems as researchers. We're at least somewhat in the domain in question and usually have access to folks in other specialties to ask questions and further develop our models.
When we have a better idea of the work we can then improve that surround to tell the user something about the status of the process or system as well. This way users can decide when to shift focus and attention.
Ultimately, as system designers, we get to make these choices.
Designers have to decide what kind of information at what level of summary and abstraction is appropriate for a surround.
Side effect views are a specific type of surround:
Side effect views provide users with information about distant areas of the information space that might be affected by actions they are taking or activities going on within their high-resolution central view.
The goal is to give users a global picture of the state changes, both main effects as well as side effects, that occur as a result of actions taken.
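A side effect view needs some model of what an action can touch. Here's a hedged Python sketch: the dependency map is an invented example, but it shows the idea of surfacing distant, affected areas alongside the user's central focus.

```python
# Sketch of a side effect view: given an action taken in the central
# view, list the distant areas of the information space it may affect.

DEPENDENCIES = {
    "reduce_coolant_flow": ["core_temperature", "pump_load"],
    "isolate_power_bus":   ["backup_battery", "comms_power"],
}

def side_effect_views(action, current_view):
    """Views outside the current focus that the action may affect."""
    return [v for v in DEPENDENCIES.get(action, []) if v != current_view]
```

In a real system the dependency map would come from domain knowledge (a plant model, a service dependency graph), which is why the authors stress understanding the work before designing the surround.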
Parallel vs Serial
Often, in order to accomplish something, different pieces of information will need to be available all at once. Unfortunately, even if that's needed for the work, it doesn't mean the visualization system provides it. When the information is located in separate areas, users are forced to view each piece one at a time. This causes the problem of "thrashing," where users have to continually switch back and forth between views in order to do their work.
When we understand what it is that users are trying to accomplish, we can allow them to see related views in parallel. This solves the problem itself, as opposed to "improving" the interface to make navigation more efficient. Even with more efficient navigation, the user would still have to thrash between views.
Taking the operating room example again we can see this play out. More efficient navigation doesn’t address the issue that there are two different kinds of information and work in the patient monitoring system. One is monitoring the vital signs of the patient, which happens throughout and is always important. But there’s also whatever specialized thing is happening at the moment for the procedure (say computing cardiac output).
In the operating room interface, both of these functions compete for the same visual area: opening a window to help with calculations obscures the vital signs. One needs to be up all the time; one is only needed in specific circumstances. This may seem like a trivial example: "well, just make both views available at the same time…". If you found yourself saying that, then you're exactly right. While it can seem obvious in this context, when we have all this information, we don't always set ourselves up that way as system designers. We make interfaces like this all the time.
Also, while we talked about the two different tasks of the OR, note that we've actually introduced a third: getting the interface to show us what we need in the moment. This isn't free; hopefully the cost is low enough not to impact the other work, but that often isn't the case.
Designing coordinated workspaces
Woods and Watts give us some advice on how to avoid these problems in general as well:
- “Developers should study/analyze what views need to be seen in parallel”
- “Explicit representation of the workspace in terms of the kinds of views and their inter-relations is a prerequisite for the design of a coordinated workspace”
- “In general, users are likely to need to see specific kinds of views in parallel”
- “Designers should make provisions for users to be able to compose, save, and manipulate sets of views as a coherent unit.”
When I think about the teams and systems that I've worked with in recent years, I think we as an industry have come furthest on the fourth point. Most of our dashboarding tools support this idea on some level.
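The first point, analyzing which views need to be seen in parallel, can even be made checkable. Here's a small sketch, with invented task and view names, that compares a task's parallel-view needs against what a layout actually shows:

```python
# Sketch: record which views each task needs simultaneously, then
# check a proposed layout against those needs.

PARALLEL_NEEDS = {
    "cardiac_output_calc": {"vitals", "calculations"},
}

def missing_parallel_views(task, visible_views):
    """Views the task needs at once but the layout doesn't show."""
    return PARALLEL_NEEDS.get(task, set()) - set(visible_views)
```

A non-empty result flags a layout that will force the user to thrash between views for that task.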
“Success is more than helping users travel in a space they cannot visualize. Success is creating a visible conceptual space meaningfully related to activities and constraints of a field of practice. Success lies in supporting practitioners to know where to look next in that conceptual space”
Takeaways
- In many ways this is about how tools change the nature of cognitive work, both for good and for bad.
- Our displays, by their very nature, are afflicted by the "keyhole property": they only allow some small fraction of the data to be viewed at one time, as if looking at a room through a keyhole.
- The typical way that system designers and developers deal with the amount of data is by adding more views, but this doesn’t help combat the keyhole property.
- Adding more views without considering the way people will use the system puts the cognitive burden of using it on the operator.
- Flexibility alone isn't enough; it can create more work for the user, who now must continually configure the UI.
- “Visual momentum” is an important concept, borrowed from cinematography, that can help system designers, like us, understand how to evaluate the displays we create.
- A lot of this advice might seem a bit vague, but that is because it is mostly strategic, not tactical, and much of the actual implementation depends on what tasks, what work is being done when someone looks at the display.
- In order to design an effective workspace, it's critical that you understand the work being done, what needs to be accomplished, and in what context.
- One way to design better displays is to think of them as a map, and the display as a space that the user must navigate through.
- The longshot, another idea borrowed from cinematography, is an important aspect of an effective display workspace. It helps guide the development and evaluation of displays.
- A balance must be struck between searching within a given display or across multiple displays.