Resilience Roundup - Can We Trust Best Practices? - Issue #38

Hey folks, Thanks to everyone who came to say hi at SRECon APAC! Also, thanks to those who brought it to my attention that the link last week gave some trouble, here is an updated link: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.623.5749&rep=rep1&type=pdf

You’ll also notice a new section this week along with the takeaways. Sponsorship will allow me to dedicate more time and resources to getting the best material and teaching about it here week after week. Please know I didn’t make this decision lightly, and this sponsor, like others, has been carefully chosen; I’ve had the opportunity to talk to the founder, Robert, at several points along the way as he’s been building his product and team over at FireHydrant.io. Please also know that sponsors have no say in the content of issues.


Can We Trust Best Practices? Six Cognitive Challenges of Evidence-Based Approaches

This is a paper by Devorah E. Klein, David D. Woods, Gary Klein, and Shawna J. Perry that uses evidence-based medicine to examine the question of whether or not best practices are useful.

The EBM (Evidence Based Medicine) approach is essentially:

  1. identify treatment of interest
  2. conduct controlled studies, double blind where possible
  3. determine effectiveness
  4. create best practices as rules (e.g. if X, do Y)
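Since most readers here are software folks, here’s a minimal sketch of step 4’s “if X, do Y” rule form as a lookup table. The condition and treatment names are hypothetical illustrations, not from the paper:

```python
# Minimal sketch: EBM-style best practices as "if X, do Y" rules.
# All condition/treatment names are hypothetical illustrations.
BEST_PRACTICES = {
    "peptic_ulcer": "antibiotics",   # revised from surgery once H. pylori was implicated
    "strep_throat": "antibiotics",
}

def recommend(condition):
    # Step 4 is the easy part: look up the rule.
    # What the rules skip: deciding whether the condition is actually
    # present, and what to do when no rule applies.
    return BEST_PRACTICES.get(condition, "no rule: practitioner judgment required")
```

The lookup itself is trivial; the paper’s argument is that everything around it, characterizing the problem and judging the evidence, is where the real cognitive work lives.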

EBM is the chosen example because it’s an area where best practices have been pursued extensively.

On the surface it sounds pretty good and straightforward: use evidence to make decisions. But practitioners make decisions about how to interpret and then apply evidence, so where does that leave them?

As the authors say:

“our interest is in the cognitive challenges that confront EBM in the context of actual practice and the complexity of patients and diseases”

The things that make EBM attractive to practitioners, as a way of helping in a field with uncertainty, risk, and high time pressure, are the very things that make it more difficult to employ.

Six challenges

  1. Characterizing Problems
  2. Gauging confidence in evidence
  3. Deciding what to do when the generally accepted best practices conflict with professional expertise
  4. Applying simple rules to complex situations
  5. Revising treatment plans that do not seem to be working
  6. Considering remedies that are not best practices

“In its extreme forms, EBM suggests a conflict between the use of evidence and the use of experience”

Characterizing problems

The very idea that a rule like “if there is a condition, apply a certain treatment” can be applied doesn’t take into account the ability to decide whether such a condition is present in the first place.

Once we know the exact problem we’re facing, picking a solution or treatment can be the easy part. This is the part EBM addresses, while ignoring the harder part: how do you identify the problem and its interactions?

Gauging Confidence in the Evidence

Choosing a treatment often means judging how relevant the evidence available is and its quality. This is further complicated by the time pressure and uncertainty in which these judgements take place.

One example of the trouble in this area is peptic ulcer disease. It was originally thought to be brought on by extra stomach acid due to stress, and best practices formed accordingly.

Barry Marshall researched the hypothesis that it was instead caused by H. pylori bacteria. But no cultures turned it up. It turned out this was because the samples were being discarded after a few days; when they were given more time, the bacteria was present and the link proven, with best practice shifting accordingly from surgery toward antibiotics.

The authors also mention the symptoms of heart attacks, where a study was conducted primarily on white males. It was later learned that the typical differentiating symptom, “feels like an elephant sitting on my chest,” occurs only 5% of the time in women.

This of course shifted best practice. (As an aside, this is something I was taught to be aware of in the field, that women experience heart attacks differently. I wonder now if best practice has since shifted further.)

All of this shows that many of these issues are not black and white; research is ongoing and practices change.

Dealing with this uncertainty is a key finding in cognitive engineering: individuals, teams, and organizations need to revise their thinking and assessments in light of new evidence.

When best practices conflict with expertise

Feedback is easily available for common conditions, but not rare ones. In complex situations, heuristics and pattern recognition are essential and cannot be replaced by sets of rules.

Applying simple rules to complex situations

Problem solving takes place in both simple, or “well-ordered,” and complex situations.

The authors use the example of inserting a central line. This is a procedure where most of the tasks are unambiguous and the success criteria are clear. As a result, using a checklist has been effective in reducing infections.

However, other problems are not so clear-cut. Take the provided example of a patient with both asthma and diabetes. Typically a severe asthma patient would require steroids, but those will drive up their blood sugar.

So the physician can’t just follow a “diabetes protocol” or an “asthma protocol”; they must make trade-offs and take into account the individual patient, not just the “average patient.” There isn’t an unambiguously right solution, though some of course are better than others.

Though physicians have to keep in mind their individual patient, the evidence available to them is typically about the “average patient”. What may be effective or ineffective in a general population may not hold true for the individual.

The draw of best practices is often simplicity, but this simplicity is limiting when considering a complex individual case.
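To put the asthma-and-diabetes example in more familiar terms, here’s a hedged sketch, with made-up protocol logic, of how two single-condition rule sets can give directly conflicting advice for one patient:

```python
# Hypothetical sketch: two single-condition "protocols" that each look
# reasonable alone but conflict for a patient who has both conditions.
def asthma_protocol_give(patient):
    # Severe asthma typically calls for steroids.
    return {"steroids"} if patient["severe_asthma"] else set()

def diabetes_protocol_avoid(patient):
    # Steroids drive up blood sugar, so the diabetes protocol avoids them.
    return {"steroids"} if patient["diabetic"] else set()

patient = {"severe_asthma": True, "diabetic": True}
conflict = asthma_protocol_give(patient) & diabetes_protocol_avoid(patient)
# conflict == {"steroids"}: the rules can't resolve this; the physician
# has to make the trade-off for the individual patient.
```

Adding more rules doesn’t remove the need for that judgment; the conflict is between the rules themselves.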

Revising treatment plans that aren’t working

EBM doesn’t lend itself to plan adaptation. Since it tends to take the form of “given these symptoms, do this thing,” there isn’t room to respond when something isn’t working or a patient’s condition changes.

Trying to reconcile those two worlds, the static nature of EBM and the need to adapt to the individual patient, can be difficult for those who want to use these “best practices.”

“Practicing physicians who want to adhere to EBM have to wrestle with several challenges: the need to pick up early signs that a best practice is not producing the intended effects, early signs of expectancies that have been violated. They have to determine when and how to gather evidence to test these concerns and when to revise or withdraw best practice”

Essentially, doctors need to balance revising a plan that isn’t working against having the experience and patience to know when more time is needed.

As we’ve covered before in reference to other domains, knowing when to change a plan is a difficult choice.

Some of the research indicates that revising a plan can be harder than starting one. Once a plan has begun, telling the difference between the underlying condition and the effects of the selected treatment can be difficult.

EBM doesn’t address issues of cognitive work in changing plans at all.

Considering treatments that aren’t best practices

What about when best practices and guidelines don’t cover the situation being faced? What about evidence that doesn’t come from studies? What about studies that give some information, but might not be considered rigorous enough? EBM doesn’t address this, but a judgement still needs to be made as to what evidence to consider.

What about learning from high achievers in the field (“rock stars”)? Of course, this can sometimes be beneficial, but it also carries the risk of teaching the wrong lessons if the learning is reduced to the kind of “naive copying” that EBM sometimes becomes.

Improve best practice with cognitive engineering

The authors suggest eight directions, borrowed from cognitive engineering, that could improve best practice approaches:

  1. Developing and sustaining expertise
    • This helps balance evidence and expertise
    • This includes continued study and practice
  2. Support adaptation
    • Make it expected that experts will adapt knowledge, not just follow a rigid ruleset
  3. Combine evidence with experience
    • As in 1, there is no need to choose one over the other; they can be combined.
  4. Balance generic evidence with experiential evidence
    • Evidence from general populations can be useful, but so is experience derived from individual cases.
    • Presenting or visualizing generic evidence in different ways can help this
  5. Represent Evidence
    • Simply publishing and reading papers is not sufficient.
    • The information needs to be presented in ways that can be more easily understood, so that it helps people actually use it. (I may be a bit biased in agreeing with this one :D )
  6. Appraise evidence
    • Not all studies provide the same quality of evidence, nor will all of today’s evidence seem applicable in 5 or 10 years.
    • Choosing what evidence to believe, and with how much confidence, is a skill that needs to be developed.
  7. Share evidence
    • Evidence needs to be shared in ways beyond just papers and publishing
    • These can be informal methods such as chat rooms (Slacks and such perhaps)
  8. Support collaborative decision making
    • Best practice should not be reduced to just looking at the individual, but should take into account how teams and organizations function.
    • Also, who the “team” is can be expanded, in the case of medicine it should include the patient.
      • Best practices are useless if the patient doesn’t want to or isn’t able to follow along

Takeaways

  • Using “best practices” is an oversimplification that ignores the cognitive work still required in complex fields and situations.
  • There are some cases that are too complex for “best practices”
  • Ignoring expertise, experts, and their hard-won heuristics is to overlook a great source of knowledge
  • Expertise can be used in addition to guidelines, but guidelines cannot replace expertise
  • No matter how strong the evidence, it cannot stand alone, practitioners still need to interpret it and decide when and how to apply it.
  • Rule-based best practices are most effective where context doesn’t matter much (e.g. how to place a central line)
  • Keeping these challenges in mind can help keep us from developing similar rule-based “best practices” that diminish or devalue expertise and experience
  • All this is not to say that evidence should be discarded. The idea of best practices is a good one in that it can help fields avoid relying entirely on anecdotal solutions.
  • Best practices come with their own challenges, though; they’re not a fix for everything, especially in situations of high variability and uncertainty.
  • “We should regard best practices as provisional, not optimal, as a floor rather than a ceiling”

Don't miss out on the next issue!