The report from the ACR Task Force on Certification in Radiology is out. This is the American College of Radiology’s formal take on how the American Board of Radiology is doing at exercising its monopoly on radiology certification.
It’s clear, concise, well-researched, and contains wonderfully diplomatic language. I admire the restraint (emphasis mine):
Fees have remained unchanged for initial certification since 2016 and MOC since 2015. We acknowledge there is a cost of doing business and reserves are necessary but increased transparency and cost effectiveness are encouraged.
This is in reference to finances like this. Such a gentle request.
Radiologists are also concerned that there is absence of scientific evidence of value.
An understatement, certainly written like it was dictated by a radiologist.
We congratulate the ABR for modernizing its testing platform for MOC Part 3. The move to OLA is a responsive change from feedback. However, we are not aware of any theory or research that supports how the annual completion of 52 online multiple-choice questions (MCQ) demonstrates professional competence.
Ooph. Boom goes the dynamite.
MOC critique is tough.
On the one hand, OLA is better than a 10-year exam on sheer convenience alone. It’s a trivial task, and I know many radiologists don’t want to complain because they’re concerned that any changes would only make MOC more arduous or challenging (a valid concern). Organizations would much rather increase the failure rate to stave off criticism about a useless exam than actually get rid of a profit-generating useless exam (see USMLE Step 2 CS).
On the other hand, what a joke. There is literally no basis for assuming this token MCQ knowledge assessment reflects or predicts anything meaningful about someone’s ability to practice. Even just the face validity approaches zero. (Of course, this argument could also apply to taking 600+ multiple-choice questions all at once for initial certification).
Scores on standardized tests have been shown to correlate better with each other than with professional performance. Practical radiology is difficult to assess by MCQs, requiring a much greater skillset of inquiry and judgment.
This relates to the only consistent board certification research finding: standardized testing scores like Step 1 are the best predictors of Core Exam passage. People who do well on tests do well on tests. And while certainly smart hard-working people are likely to remain smart hard-working people, it remains to be seen if Step 1, in-service, or even Core Exam performance predicts actually being excellent at radiology versus being excellent at a multiple-choice radiology exam.
The obvious concern is that it’s the latter: that the ABR’s tests do not differentiate between competent and incompetent radiologists, and that we are largely selecting medical students based on their ability to play a really boring game as opposed to their ability to grow into radiologists.
Successful certification programs undertake early and independent research of assessment tools, prior to implementation. This is a vital step to ensure the accurate assessment of both learner competence and patient outcomes.
Subtle dig with the use of the word “successful,” but this is the crux:
Assessments are not bad. Good assessments are a critical component of the learning process. Bad assessments are bad because they provide incomplete, confusing, or misleading information, a problem compounded when a preoccupation with doing well on said bad assessment distracts learners from more meaningful activities (look no further than Step 1-generated boards fever).
Medicine and radiology should not be limited by legacy methodology. Recognizing that learning and assessment are inseparable, the ABR has the opportunity to lead other radiology organizations, integrating emerging techniques such as peer-learning and simulation into residency programs. Assessment techniques are most effective when they create authentic simulations of learners’ actual jobs, although such techniques can be time-consuming and resource-intensive to develop.
Yes.
And I’ll say it again: diagnostic radiology is uniquely suited, within all of medicine, to incorporate simulation. Whether a case was performed in real life years ago or is fresh in the ER, a learner can approach it the same way.
Despite alternative certification boards, the market dominance of the ABMS and its member boards has been supported by a large infrastructure of organizations that influence radiologists’ practices. The ABR should welcome new entrants, perhaps by sponsoring products developed by other organizations to catalyze evolution, innovation and improvement to benefit patients.
Hard to imagine that alternate reality.
Although the ABR meets regularly with leadership from external organizations, such as APDR, the ABR could better connect with its candidates and diplomates by reserving some voting member positions on their boards for various constituencies.
As I discussed in my breakdown of the ABR Bylaws, there is a massive echo chamber effect due to the ABR’s promotion policy, which requires all voting board members to be voted in by the current leadership, usually from within the ranks of its hard-working, uncompensated volunteers. This means that, operationally, the ABR is completely led, at all levels of its organization, by people who believe in and support the status quo.
Meeting with stakeholders may act as a thermometer, helping the ABR take the temperature of the room. The recent inclusion of Advisory Committees that give intermittent feedback and the perusal of social media commentary may provide the occasional idea. But all of this information is, by the ABR’s design, put into a well-worn framework.
The ABR is designed to resist change.
No one has a vote who wasn’t voted to have a vote by those who already vote.
And that’s a problem.