Modern Illustration is a delightful online archive of vintage (c.1950-1975) illustration curated by Zara Picken.
Via The Wealth Ladder by Nick Maggiulli:
As Robert Sapolsky wrote in Why Zebras Don’t Get Ulcers:

When we sit around and worry about stressful things, we turn on the same physiological responses—but they are potentially a disaster when provoked chronically. A large body of evidence suggests that stress-related disease emerges, predominantly, out of the fact that we so often activate a physiological system that has evolved for responding to acute physical emergencies, but we turn it on for months on end, worrying about mortgages, relationships, and promotions.

If you find yourself chronically worried in life, finding ways to destress will be paramount for keeping your mental wealth intact. Studies show that exercise, sleep, meditation, yoga, as well as various mindfulness techniques can help to reduce stress. Whatever you decide to do, finding what works for you is what’s important.

Lastly, focusing on yourself and your self-esteem is another key component of mental wealth. And for many, self-esteem is built on how they perceive their social status. If you believe that you haven’t accomplished anything, then you will probably feel low status. But if you believe that what you do has purpose, then you will feel high status. More importantly, your perceived status does not need to be based on money or career accomplishments. You can find status and self-esteem in many areas of life.

Once again from Robert Sapolsky:

So, the lowly subordinate in the mailroom of the big corporation may, after hours, be deriving tremendous prestige and self-esteem from being the deacon of his church, or the captain of her weekend softball team, or may be the top of the class at adult-extension school. One person’s highly empowering dominance hierarchy may be a mere 9-to-5 irrelevancy to the person in the next cubicle.
You get to, as suggested by Calvin & Hobbes creator Bill Watterson, “recreate constructively” and construct life’s meaning. Unlike most games, the status game allows you to choose how to keep score.
Status is relative to the context in which it is being evaluated.
In some contexts, aspirations of status can even backfire. The trappings of wealth, in the wrong setting, can function more as a cloak of shame. Being deliberate about your context (career, schools, neighborhood, etc.) has an incredible impact on perceived status. As in, which Joneses are you trying to keep up with?
Thankfully, you get to choose which status game you want to play and how you want to evaluate yourself. This is both a blessing and a curse, because you can be objectively great at something and feel like a failure or you can be just okay at something and feel like a massive success. It’s all based on how you feel. It’s based on the story you tell yourself about yourself.
I am reminded of David Brooks’ distinction between resume virtues and eulogy virtues.
On living in a tumultuous, uncertain world, from the Free Press’ Abigail Shrier:
Overinvest, never underinvest, in people around you and in those you love. Particularly in a world with poor visibility, they are the closest thing any of us has to security. Give the community you inhabit a real shot and make a go of ensuring it succeeds.
If crime skyrockets, if you must leave the city, if our great civilization goes bust, you won’t regret the time you spent taking your kids and their friends out for ice cream. You’ll all carry with you memories that will bolster you wherever you go. Do all these things right up until the very moment you decide to make a change.
Here is the tough love you requested: You won’t regret having wholeheartedly invested in your home and community for as long as they are yours. I can almost guarantee, at the end of your journey, you won’t lament, “Civilization collapsed. I needed to flee. And meanwhile, I’d just bought this damn sofa.”
A few further thoughts about NVIDIA CEO Jensen Huang’s farcical description of the impact AI has “already” had in completely revamping the field of radiology.
Huang presents radiology as the “evidence” of what the near-future impact of AI on the workforce will be. I’ll include the quote again in this post for completeness:
One thing that I will say, give you some evidence, is that, and I was just telling Elon about this earlier, radiology, for example, has largely been converted to AI-driven radiology. And there’s some really great companies doing that. And the surprising thing is the prediction that all radiologists would be the first jobs to go was exactly the opposite. The trend shows that there are more radiologists being hired now as a result of AI.
And the reason for that, if you take a step back, it’s because the goal of a radiologist is not to study the images. The goal of a radiologist is to diagnose a disease. Now the studying of the images became so productive they could study more images, study more modalities, spend more time with the patients, and as a result, they were actually accepting more patients. We’re doing more radiology all around the world, we’re doing a better job with diagnosing disease.
Again, none of that is even merely an exaggeration. It’s just wholly untrue as to the current use of AI in radiology, let alone the impact far enough in the past to have percolated through and already changed how the whole field practices.
This is a man personally worth almost $200 billion in charge of a company with a market cap over $4 trillion. One imagines he has access to reality if desired.
I’ve seen the clip shared countless places by credulous people who don’t know any better. This is not to say these things won’t happen, but the use of the past tense is a real problem. I think it’s worth putting the fantasy in context.
Radiology Isn’t Illustrating Jevons’ Paradox
Jevons’ Paradox is the observation that technological advancements that increase resource efficiency can counterintuitively lead to an overall increase in resource consumption. Jevons’ original formulation concerned improving coal technology leading to paradoxically increased demand for and consumption of coal. As in, when it’s cheaper, you can buy more and do more.
Huang is parroting a now commonly invoked radiology analogy: AI makes radiologists more efficient at interpreting scans, more scans somehow get done, and voilà, demand magically increases.
The problem is that this doesn’t reflect reality, at least not currently. Imaging volumes have been increasing steadily for decades. Scan acquisition time has nothing to do with scan interpretation time (and, ironically, it is acquisition that is benefiting from machine acceleration in some situations). Interpreting efficiency has barely moved, if at all, and turnaround times are actually lengthening amidst unmanageable volumes. The increased demand for radiologists is just the secular shortage of qualified radiologists struggling to keep up with organically increasing volumes. AI has nothing to do with it.
To further compound how much we are not in Jevons’ land yet: reimbursement for radiologist professional services is stable to falling from payors (again, not related to efficiency), but radiologists have actually cost the system more recently as they demand stipends from hospital systems to provide continued access to services. Simple supply and demand at work. Those scans, even the “more efficient” ones, do not cost less for patients or payors yet. More efficient MRIs and radiologists are both paid the same on a per-study basis.
There are some AI tools for radiologists with reasonable market penetration: variations on list triage (enabling potentially positive scans to jump the queue for interpretation) and generative draft impressions derived from the findings section of the report. Meaningful computer vision is frankly not in particularly widespread use despite what is parroted by tech companies using the news as free marketing, and so far it is limited to narrow pathologies like brain bleeds, blood clots, and fractures. Any purported efficiency benefits have been unconvincing. A mediocre tool allows careless people to move faster but gives careful radiologists just another thing to review. So far, the state of the art has been mostly a wash.
Again, this isn’t one of those “AI could never do what my amazing organic brain can do!” arguments; it’s an “AI really certainly hasn’t yet in real practice, and nothing on the market so far has really moved the needle.”
Greater efficiency could trickle down to depress prices in the future and/or eventually lead to increased demand for imaging, but it hasn’t happened yet, and it’s unclear if it did that it would lead to increased demand for the human component. Healthcare is not really a free market. No one in healthcare is the equivalent of coal.
Is it possible that one day AI will lead to automated scan acquisition and instantaneous scan interpretation a la Star Trek, and we will suddenly not just want but need All Scans All The Time, and everyone will be building more and more machines to scan more and more people because interpretive costs are down, and physical throughput is the real bottleneck? Sure. We can just put a conveyor belt at the entrance to the ER!
Is it also possible that—instead of a Jevons process—professional fees could crash as radiologists become box-checking liability-operators, but that as the field contracts, the smaller number of remaining radiologists will enjoy persistent high wages in the form of obsolescence rents? Also, sure.
Is any seismic AI change, if it occurs, going to be limited to radiology? I doubt it.
Ultimately, humans are not quite a natural resource. The tortured metaphor may or may not hold in the future, but we should all be able to agree that it sure hasn’t happened in the past.
The Obsession with Jevons
So why is the world—and everyone contributing to the AI bubble—so obsessed with Jevons’ Paradox?
Because it’s comforting.
Instead of worrying that the economic value of most humans is going to trend toward zero and that our entire society will have to contend with massive disruption, it’s far easier to believe that AI efficiency will unlock the magic of productivity. That somehow, this machine intelligence revolution will set us free to do our best, most magical, most human work.
A narrative where increased efficiency leads to increased demand soothes the uncertain soul. In that version of the story, the highest-skilled humans won’t be made obsolete—they’ll be unleashed.
I don’t know if that logic will hold in real life over the coming years or not. What I do know is that invoking it right now is a marketing fantasy.
To reiterate: I don’t know what will happen to radiology or to any other field. The future is uncertain, and I certainly don’t have a crystal ball. We have plenty of bullshit jobs already, even without the help of AI.
What I do know is that all attempts to pretend radiology has already seen the fruits of AI and absorbed them—that AI is responsible for the current surge in demand for radiologist services—are lies.
But we can appreciate why this specific variation of the lie is being told:
In 2016, AI Godfather Geoffrey Hinton famously said:
People should stop training radiologists now—it’s just completely obvious within 5 years deep learning is going to do better than radiologists. It might be 10 years, but we’ve got plenty of radiologists already.
So the fact that we’re still here and in more demand than ever is supposed to be comforting to other humans.
Don’t fear AI! Even the radiologists are thriving!
I’m sure that, in advance of an earnings call, a narrative where AI unlocks human potential is so much more compelling than a zero-sum one, where the short-term economic value goes to tech companies and the long-term impact is to potentially destroy human enterprises and sink the economic value of previously high-training, high-skill, once-human tasks.
If we think further ahead, no one is going to pay for computer work the same as we did for comparable human work for a sustained period. The RVU system in healthcare already attempts to account for effort, liability, etc. We don’t get to hold even that one system’s current balance fixed while the world changes. Causes yield effects, and consequences themselves have consequences.
Anyone who imagines a perpetual money-printing machine generating revenues to match the humans they once replaced is naive, so maximum AI utopianism (and the value tied up in these companies) doesn’t envision that world. The devastating disruption fears may or may not be valid, but Huang and others in the tech world clearly feel compelled to address them.
It seems to me that Huang is trying to wave away generalized replacement fears by pretending that radiology is the canary in the coal mine and we’re still here, therefore rainbows and unicorns.
Maybe that “doomer” path to darkness is wrong, but that doesn’t mean radiology is the example to light the way.
Paul Ford, writing in Wired about post-bubble normalized (i.e. boring) AI:
People often wonder how to get back to the vibes of the early, heady days of the internet. It’s easy: Crash the global economy and leave lots of young people with keyboards and spare time. Make it boring. That’s what’s interesting.
A parable about greed, as relayed by a 17th-century Jewish widow Gluckel to her children:
As it is told of Alexander the Macedon who, as everyone knows, travelled and conquered the whole wide world: Whereat he thought to himself, «I am such a mighty man and I have travelled so far, I must be near to the Garden of Eden.» For he stood by the river Gihon, which is one of the four rivers that flow from the Garden.
So he built himself stout ships, boarded them with all his men, and through his great wisdom reached the fork where you enter towards Eden. When he neared the Garden itself, a fire came and consumed all the ships and men, save Alexander’s own ship and its crew.
He now strode to the gate of the Garden and begged to enter, for he wanted to see all the wonders of the world. And a voice answered him and bade him depart, for through this gate «only the righteous may come in.»
After Alexander had pleaded some while in vain, he finally asked that something be tossed him from over the wall, that he might show it as a token to prove that he had at last reached the gate of Eden.
Whereat an eye fell at his feet. He picked it up, without well knowing what to do with it. And a voice told him to heap together all his gold and silver and other goodly possessions and pile them in one scale of a balance, and then lay the eye in the other scale, and the eye would outweigh all the rest.
King Alexander was, it is well known, a great philosopher and a wise man, as his teacher Aristotle had trained him to be, and he sought to master all manner of wisdom. He was loath to believe that a little thing like an eye could outweigh so much heavy gold and silver and other goodly possessions, and he set about to see if it were true.
He brought him a great and mighty pair of balances, and placed the eye in one of its enormous scales. And in the other he poured hundreds and hundreds of gold and silver coins, but the more he poured the higher rose the scale and the eye proved heavier and heavier. And in wonderment he asked the reason.
Then he was told to put the tiniest speck of earth over the eye. He did so, and at once the eye rose as though it weighed a feather, and the scale with the gold and silver came tumbling to the ground.
In greater wonderment than ever, he asked how this came about. And the voice replied:
«Hearken, Alexander! The eye of man, so long as he lives, is never full. The more a man has the more he wants. And therefore the eye outweighs all your silver and gold. «But once a man dies and a speck of earth is laid over his eye, the eye is satisfied.
«Behold, you may see it, Alexander, in your own life. You were not satisfied with your kingdom and needs must travel and conquer the whole world, till you have come to the place where are the servants and children of God.
«So long, then, as you live you will never be satisfied, and you will always want and take more and more, till you will go and die in a strange land, and not so long now either. «And once you are placed in the earth, you will be content with six feet of ground, you for whom the whole world was too small.
«Go at once, and speak nor ask no more, for you will not be answered.»
So Alexander sailed with his ship to the land of Hodu, where he presently met a terrible and bitter death. For he died of poisoning, as his teacher Aristotle tells us in his history.
A penny honestly earned is hard to part with. But man must learn to control his greed. For ’tis a universal proverb, «Stinginess never enriches and measured generosity never makes one poor.» To everything there is a time—a time to get money and a time to give.
The opening of Tony Judt’s Ill Fares the Land, written in the wake of the financial crisis of 2008:
Something is profoundly wrong with the way we live today. For thirty years we have made a virtue out of the pursuit of material self-interest: indeed, this very pursuit now constitutes whatever remains of our sense of collective purpose. We know what things cost but have no idea what they are worth.
There is a word for happiness in Hebrew, simchah, that is often used to describe a festive occasion: a wedding, the birth of a child, a bar mitzvah, and so on. The defining feature of true joy is that it is best shared; it is not diminished through division.
Happy Thanksgiving.
This week, Elon Musk and Nvidia’s Jensen Huang discussed AI and the future of technology at the U.S.-Saudi Investment Forum. Here is Jensen Huang discussing radiology:
One thing that I will say, give you some evidence, is that, and I was just telling Elon about this earlier, radiology, for example, has largely been converted to AI-driven radiology. And there’s some really great companies doing that. And the surprising thing is the prediction that all radiologists would be the first jobs to go was exactly the opposite. The trend shows that there are more radiologists being hired now as a result of AI.
This is sheer unadulterated fiction. Leave aside the fuzziness of what “AI-driven radiology” might mean; AI simply doesn’t drive a meaningful part of the radiology workflow. Some AI list triage and a few algorithms to detect intracranial blood or fractures have not changed the game in even the slightest of ways. The only thing that has seen meaningful if still limited use over the past few years, and that has arguably driven even small efficiency gains, is generative AI for drafting impressions based on dictated findings.
There are, of course, more things in rare use and plenty of things announced that might matter, but that is beside the point here: Jensen is wrong.
More radiologists are being hired because there is a shortage of radiologists due to steady, some would say incessant, imaging volume growth. The heavy utilization of CT and MRI in US healthcare has literally—and I do mean literally—nothing to do with AI.
The only thing AI could potentially have to do with hiring more radiologists is faster MRI scanning: some vendor/machine/protocol combinations allow for more patient throughput, which is impressive but still in its infancy with limited market penetration.
And the reason for that, if you take a step back, it’s because the goal of a radiologist is not to study the images. The goal of a radiologist is to diagnose a disease. Now the studying of the images became so productive they could study more images, study more modalities, spend more time with the patients, and as a result, they were actually accepting more patients. We’re doing more radiology all around the world, we’re doing a better job with diagnosing disease.
Sure, the goal of a radiologist is to diagnose disease. But, let’s not pretend that radiology hasn’t, for the past century since its inception, essentially been the art and science of diagnosing disease through studying images and putting imaging findings in context. What Jensen is saying is just nonsense.
Even if we accept that the fraction of radiologists using clot- and fracture-detection tools is doing a better job diagnosing disease (very unclear), we are not, as a field, “study[ing] more modalities” (what?!) or “spend[ing] more time with the patients” (less than ever thanks to heavy volumes, long turnaround times, and the explosion of teleradiology). Current computer vision tools do not make radiologists significantly more efficient unless they inappropriately trust them enough to stop looking at the images.
And so that’s kind of the near term outcome of AI and productivity.
I don’t know if Jensen actually doesn’t know anything about radiology (everyone’s favorite white-collar AI-replacement use-case) or if this is a cynical don’t-fear-the-future puff angle. But either way, he’s wrong across the board.
I am reminded of Michael Crichton’s Gell-Mann Amnesia, where you realize how useless many perspectives and most news coverage are only when confronted with obviously incorrect information in a context where you are a subject matter expert. Almost every single news article about AI and radiology is entirely wrong. They’re acting like what the world could look like in the coming years is what has already happened: that we are awash in game-changing, useful AI that has rapidly been deployed across the field and fundamentally altered the practice of radiology. And that’s not true. Ironically, clinicians using LLMs for documentation is probably far more ubiquitous and impactful so far.
Now being wrong about the present doesn’t necessarily mean being wrong about the future. Things are changing fast, and the future is always largely unknowable.
But the reality distortion is just so damn irritating.
This week, the ABR quietly dropped a big change in their long-term plans for the new oral board version of the Certifying Exam. After the very first administration in early 2028 during fellowship for the class of 2027, subsequent administrations will occur at the end of residency:
That’s the email I got as a program director.
As in, in 2028, diagnostic radiology, as a field, will again be graduating board-certified (not “board eligible”) radiologists.
The decision to change the (useless, duplicative) Certifying Exam was first announced back in February 2023. In April 2023, they then announced their intention to bring back the oral boards.
The original plan was to keep the timing the same despite the change in format, so that residents would take the exam during the calendar year after graduating from residency, typically a few months into their first post-fellowship attending job. Despite the reality that orals would be much harder to prepare for outside of the residency training environment than a written exam, the ABR referred to this timing as “the least bad choice.”
In that “Backwards to the Future” article, I wrote:
This exam needs to be at the end of residency like it used to be. If anything, it might help combat the post-Core senioritis that many fourth-years struggle with, particularly when rotating through services outside of their chosen specialty. I appreciate that many program directors don’t want this during residency because in the past seniors used to disappear from service (and especially the call pool) before Orals just like they do now before the Core Exam. It’s easier to run a residency with only one class preparing for one big test at a time. But convenience shouldn’t be our primary metric.
Time will tell. I think I had it right in 2023, and clearly enough stakeholders agreed that the ABR has changed its plan before even doing it a single time.
In order to prevent two classes disappearing concurrently in June for their respective boards, the Core Exam has been pushed back into early/fall R4 year so that the senior year will now contain both board exams. Even with that scheduling mitigation, residencies have a lot of work to do to make this happen.
America after World War II entered an era of economic growth and intermittent societal cohesion. It also created the American consumer, an economic system based in large part on earning money to buy things, at first to change your life (a washing machine!) and then to signal to others (another new car?).
That simple instinct—to not fall behind—combined with easy credit and rising expectations, helped fuel the debt culture that still defines American life. Market capitalism naturally creates winners and losers. When growth slows or inequality widens, that sense of relative deprivation becomes a psychological trap. As in, success isn’t objective; it’s relative.
The Great Disgruntlement of the modern era has less to do with the perception of falling wages, purchasing power, and actual standards of living, and almost everything to do with social comparison: the big gap between the top and bottom of the economic leaderboard and the ability to see others doing “better” than you via social media. It’s aspirational poison that people can feed on with debt, which is what you resort to when you “need” things you can’t afford. Yes, I’m oversimplifying. There is obviously a great deal of real deprivation and struggle in this country and the world—healthcare bankruptcies, crazy house prices, etc. Right now, I’m talking about the luxury purse economy.
It’s safe to say at this point that some tasks once reserved for humans will be done by some variety of machine. It’s happened before, has been happening continuously, and will happen more so in the future. It’s possible that human capital will simply shift to more human, social, squishy tasks. We certainly need more of it than we have now, and the economy could shift in that direction. More teachers, fewer bankers.
But, if we take the bullish AI view, there’s also a possibility that the haves vs have-nots will transition from the current comparison via purchasing power to comparison via the ability to construct a (visibly) meaningful life. The transition from a work-based identity to one where large portions of society may not need to work—at least not in the traditional sense—would be more psychologically jarring than the postwar consumer revolution.
We’ve learned how economic inequality leads to material arms races. The next inequality may be existential: between those who can find meaning without work and those who cannot.
If the postwar era was defined by the democratization of consumption, the AI era may be defined by the democratization of purposelessness—and the challenge of rediscovering meaning in a world where being economically useful is no longer the foundation of identity.
David Foster Wallace, talking about TV and the dawn of the internet during his 1996(!) book tour for Infinite Jest:
At a certain point we’re gonna have to build up some machinery, inside our guts, to help us deal with this. Because the technology is just gonna get better and better and better and better. And it’s gonna get easier and easier, and more and more convenient, and more and more pleasurable, to be alone with images on a screen, given to us by people who do not love us but want our money.
My gosh: how good and prescient is that?
Wallace admitted he hadn’t made the machinery to defend himself against the pull of passivity:
We’re either gonna have to put away childish things and discipline ourselves about how much time do I spend being passively entertained? And how much time do I spend doing stuff that actually isn’t all that much fun minute by minute, but that builds certain muscles in me as a grown-up and a human being? And if we don’t do that, then (a) as individuals, we’re gonna die, and (b) the culture’s gonna grind to a halt.
…Very soon there’s gonna be an economic niche opening up for gatekeepers…And we will beg for those things to be there. Because otherwise we’re gonna spend 95 percent of our time body-surfing through shit.
Dating even further back to his 1992 essay “E Unibus Pluram,” Wallace described America as increasingly an “atomized mass of self-conscious watchers and appearers” organized into “networks of strangers connected by self-interest and contest and image.” Wallace captured the hyperpolarized tribal era of social media decades before it occurred, like a sociopolitical prophet, much the way Neal Stephenson captured technology (e.g. Snow Crash). Wallace was writing about the growing epidemic of loneliness and depression and eventually committed suicide in 2008 at the age of 46. (This was also years before it became a popular cause, as exemplified by Surgeon General Vivek Murthy’s Together, published in 2020.)
We could make it easier to make good choices, sure. We could let our better natures define the guardrails for personal behavior and technology use to circumvent the algorithms. Maybe we’ll even have personal agents to fight against corporate agents in curating a meaningful life in the age of AI. The desire for gatekeepers is real.
But it seems to me that Wallace is probably still right: the hoped-for machinery still ultimately has to be in the cultivation of self: not necessarily in the meditation/mindfulness mode or some other self-help flavor du jour, but rather in how Wallace describes freedom in his famous essay/commencement speech, “This is Water”:
The really important kind of freedom involves attention, and awareness, and discipline, and effort, and being able truly to care about other people and to sacrifice for them, over and over, in myriad petty little unsexy ways, every day.
