If you’re at ASNR this year, I’m doing part of the session on Choosing & Navigating Your First Job tomorrow (Wednesday) at 1:15pm. Come say hi!
Clinicians can bill, at least to an extent, in a way that accounts for complexity. When a patient walks into a clinic for an annual physical, an acute upper respiratory tract infection, or an endless litany of chronic complaints (uncontrolled hypertension, diabetes, and hypercholesterolemia) plus an acute complaint, the documentation and the codes used can differ between a brief med check and some more demanding undertaking.
Modality ≠ Complexity
In radiology, we don’t have complexity. We have modality. MRIs earn more than CTs, which earn more than ultrasounds, which earn more than radiographs. There is no distinction between an unindicated pre-operative screening CXR on a healthy adult and an ICU plain film. There is no distinction between a negative trauma pan-scan in an 18-year-old and a grossly abnormal pan-scan in a 98-year-old with ankylosing spondylitis, multiple fractures, and a few incidental cancers.
Leave aside adjusting actual reimbursement RVUs from payors and the government, which is beyond the scope of this essay and would likely be ultimately unhelpful anyway: assigning RVUs for reimbursement is a zero-sum game, so paying more for one thing means paying less for others. Yes, the reality is that some groups and some locations do have more complex cases than others, but capturing that in a fair way by a third party would be a substantial challenge and one with clear winners and losers. Reimbursement has never been fair: between the wide range of complexity and payor contracts, some doctors (or at least institutions) are simply paid more on a per-effort basis.
Internally, however, a group has limitless wiggle room to adjust its accounting to reward effort and pursue fairness. Again, in an ideal world, everyone receives a combination of easy and hard cases, and therefore everyone’s efforts will, on the whole, be comparable. In practice, this may not be the case in many contexts.
For example, a community division in a large university practice may not be reading the same kinds of cases as their counterparts working in the hospital. Some attendings work in rotations with junior residents, and some don’t. Different shifts and silos across practices that involve multiple hospitals or centers can vary widely, and even imaging centers in strip malls may draw different kinds of pathology by zip code and referral patterns. Even covering the ER may yield different sorts of cases with different issues depending on the time of day: the high-speed MVCs rolling in at 3 am at your hospital may be different from the parade of elderly falls that arrives during the late morning. If all radiologists share in the differing kinds of work equally, no biggie, but especially in larger practices, that is not always the case.
Across modalities and divisions, it can be relatively straightforward to assign an internal work unit according to generalized desirability or typical time spent. A group might choose to bump the internal RVUs of radiographs and decrease them for some varieties of MRI. A group might decrease single-phase abdomen/pelvis CTs and increase chest CTs. A group might bump thyroid ultrasounds but decrease right upper quadrant ultrasounds. These sorts of customized “work units” based on average time-to-dictation are common.
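As a rough illustration (and only that), here’s a minimal sketch in Python of how a group might derive those time-based multipliers from its own data. The CSV columns, the baseline exam code, and the straight median scaling are hypothetical simplifications for this example, not any group’s actual methodology.

```python
import csv
from collections import defaultdict
from statistics import median

def work_unit_table(timing_csv, baseline_exam="XR CHEST 2V"):
    """Derive internal work-unit multipliers from median time-to-dictation.

    Assumes a (hypothetical) export with columns exam_code and
    minutes_to_dictation. The baseline exam gets 1.0 internal unit;
    every other exam type is scaled by its median read time relative
    to that baseline.
    """
    times = defaultdict(list)
    with open(timing_csv, newline="") as f:
        for row in csv.DictReader(f):
            times[row["exam_code"]].append(float(row["minutes_to_dictation"]))

    medians = {exam: median(vals) for exam, vals in times.items()}
    baseline = medians[baseline_exam]
    # Internal units: median read time relative to the baseline exam.
    return {exam: round(m / baseline, 2) for exam, m in medians.items()}
```

In practice, a group would still sanity-check and hand-adjust the output (bumping the radiographs, trimming certain MRIs) rather than trusting raw timing data, but the mechanics are not complicated.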
But the problem of variable challenge within an exam type is thornier. Complexity varies, and preventing the peek-and-shriek cherry pick is a nontrivial task. A normal MRI of the brain for a 20-year-old with migraines is a different diagnostic challenge than a case of recurrent glioblastoma with radiation necrosis or progression.
Most of the metrics one could use to attempt this feat on a case-by-case basis are gameable and ultimately not tractable. If you use time spent reading the case, it’s very challenging to normalize across individuals with varying intrinsic speed, let alone the fact that someone can open an easy case and leave it open while dropping a deuce. I don’t think anyone wants to live in a world where Big Brother is tracking their mouse movements or running other invasive surveillance. Radiologists have a hard enough time fighting the widget-factory-worker mentality as it is.
But even when everyone behaves nicely, having a system that accounts for tough cases would help with frustration, burnout, blah blah. No one likes to get bogged down in a complex case and feel behind. What constitutes a solid day’s work depends on how hard the work is.
Enter the eRVU
Here’s an example of how the scalability of AI could make an intractable problem potentially tractable: applying LLMs to reports after the fact to create a complexity grid that helps account for case difficulty. Such a model (or wrapper) could be trained to create a holistic score using a variety of factors like patient age, patient location, indication, diagnosis codes, ordering provider, the presence of prior exams and the arduousness (or ease) of comparison, the number of positive or negative findings, and the ultimate diagnosis.
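To make the shape of such a tool concrete, here’s a minimal sketch of what a post-hoc scoring pass could look like. The report fields, the 1-5 scale, the prompt wording, and the call_llm helper are all hypothetical placeholders for illustration, not a description of any real product or vendor API.

```python
import json
from dataclasses import dataclass

@dataclass
class ReportContext:
    exam_code: str          # e.g., "MR BRAIN W/WO"
    patient_age: int
    patient_location: str   # ED, ICU, outpatient, etc.
    indication: str
    prior_report: str       # empty string if no comparison
    report_text: str

def complexity_prompt(ctx: ReportContext) -> str:
    """Build a prompt asking an LLM for a holistic 1-5 complexity score."""
    return (
        "You are scoring radiology report complexity for internal workload "
        "accounting. Consider patient age and location, the indication, the "
        "burden of comparison with priors, and the number and severity of "
        "findings. Ignore padding and boilerplate. Return JSON: "
        '{"score": 1-5, "rationale": "..."}\n\n'
        f"Exam: {ctx.exam_code}\nAge: {ctx.patient_age}\n"
        f"Location: {ctx.patient_location}\nIndication: {ctx.indication}\n"
        f"Prior report: {ctx.prior_report or 'None'}\n"
        f"Report:\n{ctx.report_text}"
    )

def score_report(ctx: ReportContext, call_llm) -> dict:
    # call_llm is a stand-in for whatever text-in/text-out model wrapper
    # a group actually uses; swap in your own client.
    return json.loads(call_llm(complexity_prompt(ctx)))
```

Run nightly over signed reports, something like this could feed an effort-weighted tally alongside the usual RVU counts without touching reimbursement at all.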
Now, obviously, such solutions—like all things—would be imperfect. It may even be a terrible idea.
For one, you’d have to determine how to weigh essential findings against extraneous details such that dumping word salad into a report does not increase the complexity score when it does not meaningfully add value. We all know that longer is not better. But I think if people are creative, viable experiments could be run to figure out what feels fair within a practice and how to drive desired behavior. It’s possible a big PowerScribe dump of report data could yield a pretty robust solution that takes into account what “hard” and “easy” work looks like historically based on report content and the time it took to make it. Or maybe you need to wait for vision-language models that can actually look at pictures.
Again, a non-terrible version of such a product would be for internal workload and efficiency accounting, not for reimbursement. Think of it like the customized wRVU tables already in use, but with an added layer that works across all exam types instead of just by modality.
With an effortRVU, we could account for the relative complexity of certain kinds of cases within any modality. We could account for the relative ease of an unchanged follow-up exam for a single finding, and we could account for the very heavy lift that sometimes drives certain types of cases to be ignored on the list, like temporal bone CTs or postoperative CTs of extensive spinal fusions with hardware complications.
Providing good care for the most challenging cases should never be a punishment for good citizens.
(Yes, I’m aware some institutions already use an “eRVU” for educational activities, meetings, tumor boards, etc. Accounting for non-remunerative time is also a defensible approach, but that’s not related to the challenges associated with variable case complexity itself.)
((It’s also worth noting that it’s not hard to imagine a world where payors try to do things like this without your permission. Long term, how reimbursement changes for work in a post-AI world is anyone’s guess because all the current tools suck.))
Infighting & Fun
Any attempt to differ from the status quo, any variation of customization—whether simple wRVU tweaks or something more dramatic like this—is inevitably fraught. The more such a solution is based on messy human opinions, the more contentious the discussions would likely be. Everyone has an opinion about RVUs, and no one wants to see their efforts undervalued. Every tool is just a reusable manifestation of the opinions that go into it. For example, historically common complaints about the RVUs of interventionalists (often ignoring the critical clinical role our IR colleagues play and the physical presence it requires) are a cultural and financial problem, but probably not an AI one.
It’s worth noting that the desire to not “downgrade” work or deal with infighting is probably why many practices choose to change daily targets and bonus thresholds based on subspecialty, shift type, etc., instead of creating/adjusting work units. It’s the same idea tackled less dramatically from a greater distance.
Counting every activity (phone calls, conferences, etc.) is also something that’s been deployed in some settings, but it’s easy to see how, taken to extremes, such efforts to reward behaviors can veer too far into counterproductive counting games and even tokenizing just being a decent person.
If there is a “right” answer, it may be specific to the company and the people in it, and adding complexity to the system has its own very real costs. Nonetheless, there is a strong argument to be made that some degree of practice effort to make sure that everyone’s work gets noticed and appreciated in a “fair” way is a step in the right direction for subspecialized practices.
Internal productivity metrics help prevent low effort output while smart worklists and other guardrails can ensure largely ethical behavior within the list. (But sure, theoretically, if you can solve case assignment, most everything else that matters should just even out in the long run.)
Ultimately, radiology is a field where, especially in large organizations, it can become easy to feel like an anonymous cog. Individualizing productivity accounting to truly recognize the hard, challenging work many radiologists do—and reward those who are willing to develop expertise and do a good job reading complicated cases—might help humanize the work.
(Or…maybe it would just be more stressful and counter-productive to get less credit for those easy palate-cleansers, I don’t actually know. I do know that this particular food-for-thought is bound to make some people very uncomfortable. You can tell me how far you think the gulf between possible and desirable is.)
Time for my annual update and bump of this post: like every other practice in the country, my group is also hiring!
American Radiology Associates is a 100%-independent, physician-owned radiology practice in Dallas-Fort Worth (of which I am a partner/shareholder). We’re privademic: part of the practice works with the Baylor Dallas radiology residency, and part does not. I enjoy a nice mix.
We’re hiring for body, general, neuro, NM/PET, and breast. All of us in DFW work a hybrid on-site/remote schedule. We are also hiring teleradiologists for body/general imaging.
While our partners are generally in the Dallas/Fort Worth metroplex, we are also offering a 100%-remote partnership-eligible swing position.
The swing shift is 2 pm-10 pm Central Time, weekdays (M-F) alternating every other week + 13 weekends of call (yes, that means mostly weekdays and not 7/7, and never any deep nights or super weird circadian-destroying hours). The shifts are a mixture of early outpatient and subsequent general (body + neuro) ED/inpatient work for our regional/community hospitals. Other schedule configurations could be considered on an employee or 1099 basis.
So if you’re in the market, come work with me and check out our great team in Dallas. If you’re interested, send your CV to careers@americanrad.com and CC me at ben.white@americanrad.com.
I was at the ACR last weekend (giving a talk about the current radiology practice landscape) and had the chance to enjoy one of the American Board of Radiology’s periodic update sessions, this time given by Executive Director Dr. Wagner (who, for the record, is a nice guy). There were several things that I would have loved to ask (or push back on) but wasn’t able to due to time. I thought I’d share some of that here.
On Representation
Dr. Wagner said that the reason they don’t have specific stakeholder representation slots—like a member from the ACR, trainees (RFS), or young professionals (YPS)—is that they believe those people would advocate for their specific stakeholders instead of patients, who are the true mission of the ABR.
The mission, for those who don’t know:
Our mission is to certify that our diplomates demonstrate the requisite knowledge, skill, and understanding of their disciplines to the benefit of patients.
Leave aside that such stakeholder advocacy/perspective would be perfectly reasonable and not necessarily at odds with the ABR’s mission. If you happened to read my article on the ABR’s bylaws from back in 2019, you may recall how the election of the Board of Governors works. From the updated November 2024 bylaws:
Section 4.3. Election of Governors. Nominees shall be solicited from the Board of Trustees and the Board of Governors, and may be solicited from any appropriate professional organization. Professional organizations shall provide such nominations in writing. An affirmative vote of at least three-fourths (3/4ths) of the entire Board of Governors shall be necessary for the election of any nominee to the Board of Governors.
I would say this process is insular and suspect, and I don’t see any plausible argument for how this method ensures a board that is more “mission-driven” than any other method. Company boards are usually elected by shareholders and not the boards themselves because external accountability is critical for governance.
Dr. Wagner admits the average age of the board is, in his words, “very old.” But since the board leadership itself selects the future board leadership without external stakeholder accountability or vote, the control and trajectory rest entirely internally. Dissident voices have no power and no recourse to engender change.
(The ABR would tell you they take oodles of feedback via various stakeholder panels and work groups. I really, really, don’t think that’s sufficient. Of course the leadership is old when the process of getting on the board has often meant paying your dues through years of volunteering and ultimately being selected by the board itself to fill its own vacancies.)
If you’ve read the bylaws of the ABR, you would also know that the fiduciary duty of the board members is to the board itself—not to the field of radiology, not to patients, and not even to its stated mission. From Section 6.1, Conflicts of Interest:
It is the policy of this Corporation that the legal duty of loyalty owed to this Corporation by an individual while serving in the capacity as a member of the Board of Governors or Board of Trustees requires the individual to act in the best interests of this Corporation. Consistent with the duty of loyalty, a person serving as a member of the Board of Governors or Board of Trustees does not serve or act as the “representative” of any other organization, and said member’s “constituency” as a member of the Board of Governors or Board of Trustees of this Corporation is solely this Corporation and is not any other organization or its members.
Arthur Jones of Procter & Gamble once remarked that “all organizations are perfectly designed to get the results they get.” The reality is that this composition, process, and mandate create an unnecessarily insular crowd of well-meaning people designed to make exactly the kind of system we have. Until this process is deliberately opened to demand outside perspectives, those perspectives will simply not occur with enough frequency to make a difference.
On Fees
The ABR was also asked about fees, particularly the fact that the fees for initial certification hurt when you are a trainee (here and here and here are a few posts about fees).
The answer was a bit of self-love about how fees have stayed flat despite inflation (without any consideration of whether they could ever, you know, go down).
I would point out, however, that enrollment in MOC has increased over the years as new diplomates graduate while many of those retiring had been grandfathered—such that revenues from MOC have increased over time.
According to the ABR’s 2023 Form 990, there were approximately 35,200 diplomates enrolled in MOC to the tune of $340 a pop. In 2017, that number was 27,000. The increases won’t continue as grandfathered radiologists eventually all retire, but the revenues from MOC have obviously increased. In 2017, MOC revenues were $9.2 million and total revenues were $17.4 million (a previous deep dive is here if you’re curious). In 2023, it was almost $12 million on total revenues of $18.36 million (this after, to their credit, CAQ exam fees were finally slashed from their ludicrous $3k+ price tag).
This means that MOC accounts for an increasingly large amount of money and a larger fraction of revenues over time. I believe ABMS fees should be decreasing, not increasing—especially for a small group like trainees, who make less, usually do not have assets, and should not be required to pay so much for initial certification, particularly now that MOC for attending physicians with higher incomes is essentially an involuntary permanent revenue stream. Initial certification still costs $3,200—no small sum for a bunch of multiple-choice questions written by volunteers and administered over the internet.
Any inflationary increases in expenses are presumably mostly related to the salaries of their employees. The ABR spent $1.66 million on executive salaries, $7.67 million on other salaries and wages (+$1.1 million on retirement contributions and other benefits), and $1 million on travel.
If the ABR takes its fiduciary responsibilities seriously, I think we can make that math work a little better on behalf of our youngest colleagues.
The Return of Oral Boards
There was frank acknowledgement that the Certifying Exam is duplicative and useless (my words of course, not the way they said it), with a caveat that it was a well-intentioned mistake that seemed like a good idea at the time. I think it was pretty clear from the outset that the current Certifying Exam had no value after the very similar Core Exam.
So, Orals are back (my initial thoughts here). Whatever the downsides, there is no denying that oral boards will provide more value than the current Certifying Exam.
There was lip service paid to the consideration of alternative ideas, like whether AI could provide value in the certification process. But this was dismissed with a handwave that “If we can’t all agree on what a good report would look like, how could we have an AI do so?”—which is bizarre in the sense that the same problem arises when assessing communication and critical thinking in the context of an oral examination.
What was not acknowledged is that a knowledge-based MCQ assessment doesn’t test skill (a stated important part of that ABR mission). A small number of contrived oral board cases unquestionably adds a different and useful kind of information, but even this is a skills assessment in only a very limited fashion.
So the critical question remains: are we sufficiently assessing skill with any of our current or proposed exams?
If we are being forward-thinking, we could just acknowledge that multiple-choice knowledge assessments are suboptimal. They have been historically necessary and remain efficient, but in our modern connected world they are increasingly ill-suited to providing actionable information. Knowledge is simply only part of the game.
Simulation is the only way the ABR can really fulfill its mission, and we are clearly living in a world where we could employ batch grading of dictated or typed answers, forcing residents to make findings from real cases and then provide diagnoses or differentials instead of just picking one from a list.
We could do this without AI, but yes, we certainly could employ AI to help grade answers (if not necessarily full reports). This would be a much more meaningful assessment.
The goal of the ABR exam certification process is to differentiate between competent and incompetent radiologists, and I don’t think we’re there yet. As always, I am disappointed by the lack of imagination at play. We can do better.
A struggling post-bankruptcy Envision was so desperate to get out of their radiology business—presumably due to the impossibility of recruiting/retaining, meeting clinical obligations, navigating the general tumult, etc.—that they have agreed to “transition” their remaining practices/contracts to RadPartners. Sounds like they already shed the salable groups/assets/non-competes, so this is most likely a final liability dump/hand-washing endeavor. It’s unclear how much money changed hands for the remaining 400 rads and some desperate/unhappy hospitals.
I don’t think I was ever more uncertain about my chosen field than during the first couple of months of my R1 year. Coming off my intern year, I had gained in skill and responsibility, and I wouldn’t have been unhappy taking on a role as an internist during my PGY2 year.
I didn’t read all that much radiology material during my intern year and had no radiology electives because there was no radiology residency where I did my transitional year (an ACGME requirement). So when I began radiology at a new institution—with new people and a new hospital—it was a complete reset.
The first lecture I attended as a radiology resident was GU #3, the third part in a series of case conferences on genitourinary imaging, covering topics like intravenous pyelograms. I had absolutely no idea what was going on. That feeling—of being completely lost—defined much of my early experience in radiology. I lacked the foundation to get anything meaningful out of the lectures.
In the reading room, I spent a lot of time transcribing and editing reports—often repeating words I didn’t understand about anatomy I barely knew. We had a weekly first-year radiology resident interactive conference (a 2-hour pimp-session) based on chapters from Brant and Helms, but this meant I had to do additional reading on my own time, which didn’t always align with what I needed to learn for my rotation. The questions were always challenging and got harder until you failed. There was no escape.
Of course, in the end, it all worked out. At the time, I benefited from some slower rotations at the VA, which gave me some extra time to shore up my reading. And I kept plugging away, day after day on service, doing my best to understand what I was looking at and awkwardly dictate that into comprehensible English (hopefully quietly enough that no one could hear me).
It’s not weird to find radiology disorienting when you first start—it should be expected. The medical school process trains you for clinical medicine. Especially between third year, fourth year, and the intern year, you develop along a continuum that doesn’t naturally lead toward a career in diagnostic radiology.
Becoming a radiology resident is a step backward in personal efficacy. For someone who has done well in school, met expectations across multiple clerkships, and excelled on tests, it’s frustrating to suddenly feel useless.
Some people struggle with feeling like they’re not a “real doctor” in radiology because they are removed from direct clinical care for a large portion of their time. But that sense of detachment is even more profound when you can’t even do your job yet. You can only watch an attending highlight your entire report, delete it en bloc, and start from scratch so many times before your ego takes a hit.
Some attendings even dictate reports to you word for word as though you’re very slow, inaccurate, fleshy dictation software, and then judge your performance by how well you parrot everything back. This process can feel infantilizing.
But, as I’ve previously discussed in the craftsmanship mentality of residency training, I believe we can find satisfaction in our work by taking pride in doing it well.
Reading books is important. Doing practice cases and questions is important. Watching videos can be helpful. You absolutely must do the extra work to become proficient in radiology. You can’t just rely on the list gods to expose you to the full spectrum of pathology needed to adequately learn radiology and provide high-quality diagnostic care.
When everything feels overwhelming—the sheer volume of material, the anatomical complexity, the endless variations in pathology—the answer is to take it one scan at a time.
From the titular reference of Anne Lamott’s beloved Bird by Bird: Some Instructions on Writing and Life:
Thirty years ago my older brother, who was ten years old at the time, was trying to get a report on birds written that he’d had three months to write, which was due the next day. We were out at our family cabin in Bolinas, and he was at the kitchen table, close to tears, surrounded by binder paper and pencils and unopened books on birds, immobilized by the hugeness of the task ahead. Then my father sat down beside him, put his arm around my brother’s shoulder, and said, ‘Bird by bird, buddy. Just take it bird by bird.’
You learn by doing. Every day is a learning experience. Every scan is a chance to learn a new anatomical structure or detail. Every pathology is an opportunity to expand your internal library of normal versus abnormal. Every case is a lesson—not just in recognizing the pathology present but also in differentiating it from other possible diagnoses. Yes, the work has to get done, but it can’t just be about getting through the work.
The key to being good at radiology—beyond hard work, attention to detail, and sustained focus—is realizing that taking it scan by scan isn’t just a temporary survival strategy for residency:
It’s the way we learn—when we’re right, actively reinforcing our knowledge, and when we’re wrong, absorbing the painful but essential lessons that only come from making mistakes over and over and over again.
I made the mistake of procrastinating on something more meaningful by reading a variety of random commenters on issues related to radiology. One type of flawed thinking stuck out: the all-or-nothing fallacy.
For example, as it pertains to artificial intelligence, the argument often goes, “AI will never replace a human in doing what I can do, and therefore I can ignore it.” Or, “I put a radiology screenshot into a general-purpose LLM and it was wrong,” or “our current commercially available pixel-based AI is wrong a lot,” and therefore, “I can ignore the entire industry indefinitely based on the current commercially available products.”
Leave aside the potentially short-sighted disregard for this growing sector because of its obvious and glaring current shortcomings. Even the current state of the art can have an impact without actually replacing a human being in a high-level, high-training, high-stakes cognitive task.
For instance, let’s say the current radiologist market is short a few thousand radiologists—roughly 10% of the workforce. Basic math says we could:
- Hire 10% more human beings to fill the gap (difficult in the short term)
- Reduce the overall workload by 10% (highly unlikely)
- Increase efficiency by 10%
The reality is, it doesn’t take that much magic to make radiologists 10–20% more efficient, even with just streamlining non-interpretive, non-pixel-based tasks. If only enterprise software just sucked less…
We don’t need to reach the point of pre-dictated draft reports for that to happen. There’s plenty of low-hanging fruit. Rapid efficiency gains can come from relatively small improvements, such as:
- Better dictation and information transfer. When dictation software is able to transcribe your verbal shorthand easily (like a good resident), radiology is a whole different world.
- Real summaries of patient histories.
- Automated contrast dose reporting in reports.
- Summaries of prior reports and follow-up issues (e.g., “no change” reports where previous findings are reframed in a customizable style and depth).
- Automated transfer of measurements from PACS into reports with series/image numbers.
- Automated pre-filling of certain report styles (e.g., ultrasound or DEXA) based on OCR of handwritten or otherwise untransferable PDFs scanned into DICOM.
These tasks, as currently performed by expensive radiologists, do not require high-level training but instead demand tedious effort. Addressing them would reduce inefficiency and alleviate a substantial contribution to the tedium and frustration of the job.
Anyone who thinks these growing capabilities—while not all here yet, nor evenly distributed as they arrive—can’t in aggregate have an impact on the job market is mistaken. And if AI isn’t implemented quickly enough to prevent the continued expansion of imaging interpretation by non-physician providers, the radiology job market will be forced to contend with a combination of both factors, potentially leading to even more drastic consequences.
When you extrapolate a line or curve based on just two data points, you have no real idea where you started, where you’re headed, or where you’re going to end up. Just because you can draw a slope doesn’t make the line of best fit meaningfully reflect reality or extrapolate to a correct conclusion.
Don’t fall prey to simple black and white thinking.
What is quality care, and how do you define it?
I suspect for most people that quality is like pornography in the classic Supreme Court sense—you know it when you see it. But quality is almost assuredly not viewed that way when zoomed out to look at care delivery in a broad, collective sense. Instead, it’s often reduced to meeting a handful of predetermined outcome or compliance metrics, like pneumonia readmission rates or those markers of a job-well-done as defined in the MIPS program.
The reality is that authoritative, top-down central planning in something as variable and complicated as healthcare is extremely challenging, even if well-intentioned. As Goodhart’s law says, “When a measure becomes a target, it ceases to be a good measure.”
In the real world, I would argue a quality radiology report is one that is accurate in its interpretation and clear in its communication. But without universal peer review, double reading, or an AI overlord monitoring everyone’s output, there is no way to actually assess real quality at scale. You can’t even tell from the report if someone is correct in what they’re saying without looking at the pictures. Even “objectively” assessing whether a report is clear or helpful in its written communication requires either a human reviewer or AI to grade language and organization by some sort of mutually agreed-on rubric. It’s simply not feasible without a significant change to how healthcare is practiced.
And so, we resort to proxy metrics—like whether the appropriate follow-up recommendation for a handful of incidental findings was made. The irony, of course, is that many of these quality metrics are a combination of consensus guidelines and healthcare gamesmanship developed by non-impartial participants with no proof they reflect or are even associated with meaningful quality at all.
We should all want the quality of radiology reporting to improve, both in accuracy and in clarity. Many of these problems have been intractable because potential solutions are not scalable with current tools and current manpower—which is why soon you’ll be hearing about AI for everything, because AI solves the scaling problem, and even imperfect tools over the coming years will rapidly eclipse our current methods like cursory peer review.
Everyone would rather have automated incidental finding tracking than what most of us are still using for MIPS compliance. Right now, it’s still easy to get dinged and lose real money because you or your colleagues omitted some BS footer bloat about the source of your follow-up recommendations for pulmonary nodules too often. Increased quality without increased effort is hard to complain about.
But even just imagine you have a cheap LLM-derived tool that catches sidedness errors (e.g., right abnormality in the findings and left in the impression) or missing clarity words like forgetting the word “No” in the impression (or, hey, even just the phrase “correlate clinically”). This already exists: it’s trivial, requires zero pixel-based AI (and—I know—is rapidly becoming table stakes for updated dictation software), but widespread adoption would likely have a more meaningful impact on real report quality than most of the box checking we do currently. A company could easily create a wrapper for one of several current commercial products and sell it tomorrow for those of us stuck on legacy systems. It might even be purchased and run by third parties (hospitals, payors, Covera Health, whatever) to decide which groups have “better” radiologists.
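For what it’s worth, the most basic versions of these checks don’t even need an LLM. Here’s a minimal sketch of the kind of string-level screening described above; the section parsing, word lists, and patterns are deliberately naive placeholders for illustration, not a real product.

```python
import re

def sidedness_flags(findings: str, impression: str) -> list[str]:
    """Flag possible left/right mismatches between findings and impression."""
    flags = []
    for side, other in (("right", "left"), ("left", "right")):
        if (re.search(rf"\b{side}\b", findings, re.I)
                and re.search(rf"\b{other}\b", impression, re.I)
                and not re.search(rf"\b{side}\b", impression, re.I)):
            flags.append(f"Findings mention '{side}' but the impression only mentions '{other}'.")
    return flags

def dropped_negation_flags(findings: str, impression: str) -> list[str]:
    """Flag an impression that loses the 'no' present in the findings."""
    flags = []
    if (re.search(r"\bno acute\b", findings, re.I)
            and re.search(r"\bacute\b", impression, re.I)
            and not re.search(r"\bno acute\b", impression, re.I)):
        flags.append("Findings say 'no acute ...' but the impression drops the 'no'.")
    return flags
```

Pointed at the findings and impression sections of a finished report, checks like these would catch the classic dropped-“No” and flipped-sidedness slips before signing; a dictation-system or LLM-based version would just generalize the same idea.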
But now, take it that one step further. We’ve all gotten phone calls from clinicians asking us to translate a colleague’s confusing report. Would a bad “clarity score” get some radiologists to start dictating comprehensible reports?
It’s not hard to leap from an obviously good idea (catching dictation slips) to more dramatic oversight (official grammar police).
Changes to information processing and development costs mean the gap between notion and execution is narrowing. As scalable solutions proliferate, the question then becomes: who will be the radiology quality police, and who is going to pay for it?
As we discussed in The Necessity of Internal Moonlighting, a practice can regularly need some extra manpower to maintain turnaround times or mitigate misery without needing a full additional FTE shift on the schedule (or, alternatively, you may need some real shiftwork but not want to press people into service without additional reward).
Take this recent article about “Surge” staffing in radiology as described in Radiology Business:
On-service radiologists utilize Microsoft Teams to contact available nonscheduled rads during periods of heavy demand. Team members who are available then can log on remotely and restore the worklist to a “more manageable length,” logging their surge times in the scheduling system in five-minute increments. Compensation is based on the duration of the surge and time of day when it occurs.
Just-in-time overflow help is an important use case of internal moonlighting, and doing this with less friction is exactly what LnQ is trying to facilitate and streamline.
Firstly, it should almost go without saying, but: you can do this.
I’d also like to acknowledge that nothing below is particularly noteworthy or novel advice. The Core Exam is like the other high-stakes multiple choice exams you’ve taken except for the fact that it has more pictures.
And, of course, the question of how to pass the Core Exam after a failure is mostly the same as asking how to pass it in the first place. Before we get further, I published a series of articles grouped here as a “Guide to the Core Exam” that lays out a lot of helpful information. There are some out-of-date passages (about failing because of physics and about the old in-person exam logistics), but the core information is unchanged.
Acknowledge Luck
As you no doubt noticed during your first attempt(s), the questions on the ABR exams are somewhat arbitrary in what they include, so part of your performance is luck of the draw. While the difficulty is curated, the specific content breakdown is not a perfect cross section of possible topics. You can have the same diagnosis multiple times but then zero questions on broad swaths of important material. How your knowledge gaps line up can make a big difference.
Your performance on a given day is a product of your variable skill (energy, focus, attention, etc) and the exact exam you get. All things being equal, that means that a borderline failure is also truly a near pass.
Dissect Your Performance
Look at the two breakdowns: organ (breast, cardiovascular, GI, GU, MSK, neuro, peds, and thoracic) and modality (CT, IR, MR, Nucs, radiography, US, NIS, and Physics). Identify any outliers, and plan to shore up those shortfalls with extra dedicated, targeted review.
At the same time, do not neglect your strengths entirely. Backsliding is counterproductive.
The nature of spaced repetition is that you need more reps more frequently for new knowledge and problem areas and fewer reps spaced further apart for your strengths—but you still need reps across the board.
Further Reading: Learning & The Transfer Problem
Review Your Study Methods
What exactly was your study method and plan for your initial attempt(s)?
There are a couple of maladaptive tendencies common amongst medical students that can persist into residency:
- The tendency to graze across too many resources. Focus on fewer things and learn them deeply.
- The tendency to spend too much time passively reading (and especially re-reading) books like Crack the Core at the expense of doing lots of high-quality questions. We are radiologists, and the majority of the exam is image identification: you need to look at lots and lots of pictures.
When it comes to doing practice questions, you also need to zoom out and look for trends:
More than just stronger/weaker subspecialty performance, are there any themes to why you get questions wrong? Is there a time component? Is it that you often don’t see the finding in the picture? That you simply don’t know the diagnosis? Or that you’re being fooled by differential considerations and need to focus on key features that distinguish between plausible alternatives? Is it a confidence issue and you’re overthinking it, getting spooked by questions that seem too easy? If you change answers, are you more likely to change from wrong to right or right to wrong? (I think most people overthink it and change for the worse).
If there’s a pattern, it can be the key to unlocking performance.
Further Reading: Dealing with Test Anxiety and Demoralization
Questions/Cases > Reading >> Re-reading
First: Horses > Zebras.
In general, the biggest bang for your buck is still going to be common diagnoses (including weird looks of common things) and normal variants over esoterica. Rare things mostly show up as absolute Aunt Minnies that you can tell at a glance (hence the need for lots of image-based questions).
On a related note, if you never saw them, the ancient free official ABR practice test from 2014 is available on the Wayback machine here.
Also worth mentioning: NIS is a bunch of random material. Many people can read the manual a couple of times and get the job done here, but the reality is that these need to mostly be easy points. If you don’t retain pseudoscientific business jargon naturally, then don’t shirk the review here. The NIS App, for example, is well-reviewed, but there is also an Anki deck.
Spaced Repetition
You can use the ACR DXIT/in-service Anki deck for a large number of good free questions. You could also use one of the massive Core Radiology decks. But for the second round of studying after a failure, making quick cards of every question you guess on or get wrong from whatever source you’re using (with your phone’s camera or with screenshots) and incorporating them into repeated longitudinal review may be the highest yield.
In Windows, for example, Windows+Shift+S opens up a quick adjustable screenshot reticle that will copy that portion of your screen to the clipboard.
On Mac, the adjustable screenshot shortcut is Shift+Command+4, which automatically saves to desktop. To save to your clipboard, add Control, so Ctrl+Shift+Command+4.
The Reality
Passing is almost certainly a matter of doing more high-quality reps while not over-focusing on weaknesses such that you backslide on your relative strengths. The Core Exam is arbitrary enough that some of your high- and low-performance areas may not be as meaningful as you think, so you need to continue broad reps in addition to the extra targeted review.
Once you can emotionally recover from the setback and just get back to it, it’s going to work out.
Further Reading: Focused Nonchalance