When you go looking for evidence to confirm your beliefs, you can almost always find it. That doesn’t mean you’re right. It means confirmation bias is a real cognitive trap.
Radiologists (or clinicians of any stripe) need to constantly and consciously balance observation and synthesis (putting together multiple findings to reach a conclusion) against the tendency to anchor on initial observations in ways that can impair objective analysis.
As in: is this additional imaging or clinical finding subtle or simply not there?
Imaging interpretation is a surprisingly noisy process. Sometimes we simply don’t know if a finding is “real” or not—we make judgment calls based on intuitive probabilities all the time. When findings make sense for a given clinical picture, we are more likely to believe them. Conversely, when we know what to look for, we are more likely to marshal our attention effectively and identify subtle findings.
But: balance in all things.
There are two facets of confirmation bias that deserve their own discussion here: cherry picking and selective windowing.
Cherry Picking
You can’t judge the likelihood of an event after the fact. This is part of the unfairness of Monday-morning quarterbacking and medical malpractice: you can’t predict the weather that occurred last week. Forecasting is a prospective process.
From Richard Feynman’s classic The Meaning of It All: Thoughts of a Citizen-Scientist:
“A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.
This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”
He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”
The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.”
His conclusion?
“Never fool yourself, and remember that you are the easiest person to fool.”
This is also why, when we evaluate a new AI tool, we don’t just judge how well it works on its training data. That information doesn’t help us predict how well it will work in the real world.
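As a concrete illustration, here’s a minimal sketch of the problem (a toy scikit-learn example with made-up data, not any particular radiology AI product): a flexible model fit to pure noise looks nearly perfect on the data it has already seen while performing no better than a coin flip on data it hasn’t.

```python
# Toy illustration: training-set performance is an optimistic estimate,
# not a forecast. (Hypothetical data; no real signal exists here.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))    # 200 "patients", 50 random features
y = rng.integers(0, 2, size=200)  # random binary labels: no true signal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("training accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Training accuracy comes out near 100% while held-out accuracy hovers near 50%. Only the held-out number is a forecast.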
Cherry picking is seductive, which is exactly why it’s so easy to fool yourself: we can’t draw reliable lessons from post hoc judgments.
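Feynman’s rat story is also easy to simulate. In this hypothetical sketch (Python with NumPy/SciPy; all numbers invented for illustration), the “rats” turn completely at random, yet if we scan each finished experiment for whatever pattern looks peculiar (too many rights, too many lefts, alternation, streaks) and then compute a p-value for the pattern we happened to find, we clear the one-in-twenty bar far more often than one time in twenty.

```python
# Hypothetical simulation of post hoc pattern hunting: pick the
# "peculiar" pattern after seeing the data, and p < 0.05 arrives far
# more often than 1 in 20, even though every rat turns at random.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n_experiments, n_turns = 10_000, 30
false_positives = 0

for _ in range(n_experiments):
    turns = rng.integers(0, 2, size=n_turns)        # 0 = left, 1 = right
    rights = turns.sum()
    alternations = np.sum(turns[1:] != turns[:-1])  # right-left-right...

    # Post hoc: test every striking pattern we might notice, keep the best.
    p_values = [
        binom.sf(rights - 1, n_turns, 0.5),            # too many rights
        binom.cdf(rights, n_turns, 0.5),               # too many lefts
        binom.sf(alternations - 1, n_turns - 1, 0.5),  # alternating
        binom.cdf(alternations, n_turns - 1, 0.5),     # streaky
    ]
    if min(p_values) < 0.05:
        false_positives += 1

print(f"experiments with a 'significant' pattern: "
      f"{false_positives / n_experiments:.1%}")  # well above 5%
```

The fix is exactly what Feynman prescribed: treat the pattern you noticed as a new hypothesis and test it on fresh data.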
Selective Windowing
Selective windowing refers to the tendency to seek out and interpret the subset of information that confirms our pre-existing beliefs or expectations while ignoring or discounting information that contradicts them. By analogy, a window constrains your view of the outside world.
The selective windowing of attention can dramatically skew decision-making.
I had an attending who, upon seeing one finding pointing in a particular direction, would then “see” several subtle supporting features to confirm a diagnosis. I assume some of this ability stemmed from experience and reflected true expertise.
But some residents would also play a game during readout: they would describe the patient’s symptoms but purposefully not mention the side, and the attending would concoct a tidy narrative beautifully tying together a number of subtle observations. The problem, as I’m sure you guessed, is that it would frequently be the wrong side. The observations were only possible through that selective window. Too narrow a window and your view of the world is woefully incomplete and distorted. To torture another metaphor, the anchor of that initial observation sank the proverbial diagnostic ship.
But, in practice, what a fine line to walk! Being sensitive to subtle manifestations of a complex process versus just seeing what you expect to see. Many radiologists have pet diagnoses that they call more often than their colleagues do. There are neuroradiologists who seem positively primed to see the findings of idiopathic intracranial hypertension or normal pressure hydrocephalus. Some of them assuredly are better, more thoughtful radiologists. But some aren’t. Some will anchor on an initial observation and confirm their way to the story.
* * *
Attention is a finite resource. The world is too rich and vibrant to be seen unfiltered. We are always windowing, and when faced with important decisions, we must always seek to widen our window to consider competing information and address alternative explanations. Evidence is ubiquitous: it’s usually easy to find support for your preferred position, even when it’s wrong.