Ben White

The Intern Pivot

03.20.26 // Medicine

A reader asked me last year whether they should pursue applying to radiology as an intern because they felt like, in the end, clinical medicine wasn’t for them.

  • I have no idea. Maybe?
  • But also: Internships are hard, and grass is green.
  • Everything has good and bad parts.
  • It’s easier to blame the current context than acknowledge our own roles and attitudes in grading a subjective experience. As in, it doesn’t always have to feel this way.
  • Coffee and vibes are great, but they’re easy to overweight based on a brief trip on rounds.
  • Whatever you do, every field is better with a craftsman’s mentality.

If Match Day results weren’t what you wanted, be aware: the mind is powerful. You will adjust, you will be happy, and you can find meaning in Plan B—hopefully so much so that, years in the future, you will look on this as the universe giving you what you needed over what you wanted.

Paying Surveys for Doctors

03.19.26 // Medicine

Medical surveys are an easy way to make a few bucks at a good hourly rate (well, maybe at least for a resident), and there are multiple sites offering surveys to physicians. The caveat is that, of course, most survey sponsors are typically looking for board-certified physicians with multiple years of experience, particularly in sub-specialties. The less experience you have, the more you need to be prepared to get screened out of what seem like promising survey opportunities.

This article was originally posted way back on Feb 26, 2014 and last updated March 2026. This page contains referral/affiliate links (thank you for your support).

ENOS is a new healthcare panel with a novel premise: members are paid instantly via Venmo, PayPal, or paper check—no delays or redemption thresholds. ENOS always pays for your time: even if you’re ineligible for a survey after completing screener questions, they still send you $5. Readers get a $25 sign-up bonus.

ZoomRx is also excellent and has a nice app and better/shorter-than-average surveys. $25 sign-up bonus for the following specialties: Hematology/Oncology, Cardiology, Neurology, Gastroenterology, Psychiatry, Nephrology, Rheumatology, Allergy/Immunology, Pulmonology, Dermatology, Urology, Endocrinology, Surgery. They even pay for attempts when you screen out.

One of the biggest survey sites is Sermo (also an online healthcare community), which is now offering my readers a $10 welcome bonus. The survey experience has been recently revamped, and once you maintain a balance of $100 in honoraria, you get preferentially invited to more surveys.

One of my very favorites is InCrowd, which has a slick mobile-friendly site and will send you survey opportunities by email or text message. These are always of the very short and painless variety (the fastest of all in my experience), so the payouts are small, but it’s good money for the time and basically effortless. You do have to respond quickly before surveys fill up, but you even get a buck when you get screened out. Being referred (like signing up through that link) will earn you a $10 bonus after you answer your first two microsurveys.

M3 now has three separate very active research companies under its umbrella: M3 Global Research, M-Panels, and All Global Circle. You can earn $25 for joining one panel, $40 for two, and $60 for joining all three (for the following specialties: Hematology-Oncology, Neurology, Gastroenterology, Nephrology, Cardiology, Urology, Surgery, Rheumatology, Obstetrics and Gynecology, Pulmonology, Allergy and Immunology, Family Medicine, Psychiatry, Dermatology, Ophthalmology, Endocrinology/Diabetes, and Pediatrics).

Curizon has been in the business a long time, but they recently completely revamped their website and platform. It’s a trusted site for well-paying healthcare surveys for physicians as well as other healthcare professionals. Every new registration is entered in a monthly drawing for $100.

At the resident level, one of my old favorites has been Brand Institute, which almost exclusively sends out short surveys about potential drug brand names. Payouts are always on the smaller side ($15), but each one is quick (about $1 per minute or more) and screen-outs are rare. So if you get invited to a survey, you can generally complete it and get the honorarium. No BS. The main style/format is nearly always the same, so you pick up speed as you do more of them. And that honorarium is also significantly larger than what one can generally pull as a non-physician (e.g., SurveySavvy, the biggest and most popular survey site around, usually pays a measly $2 per survey). The website, however, is clunky and terrible. You’ve been warned.

Additional legitimate survey sites, many of which are significantly less active, are below:

  • ImpactNetwork
  • Reckner Healthcare
  • OpinionSite
  • MDforLives is a newer company that I cannot recommend at this time.
  • Olson Research Group
  • CurbsideMe (now defunct)
  • Epocrates Honors
  • DoctorDirectory
  • MedSurvey
  • Advanced Medical Reviews
  • Physicians Round Table
  • Truth on Call (text-message based surveys; not sure this is meaningfully active anymore)
  • MedQuery
  • Medical Advisory Board
  • SurveyRx
  • Physicians Advisory Council
  • Health Strategies Group
  • InspiredOpinions (Schlesinger Associates)
  • Medefield
  • Encuity Research
  • e-Rewards Medical
  • Physicians Interactive

Information vs Education

03.17.26 // Medicine

Preparing for standardized tests by yourself using high-quality resources is both effective and a little bit soulless. There’s a reason why much of medical education could be streamlined in both time and cost to what amounts to an old-time correspondence course—that’s because it’s long been sold as an information problem, and the core question for schools has been how to best transmit the holy information to the student.

This was probably historically at least partially true, but in the 21st century, this conception misses the point: information is no longer the core skill that needs cultivation. It is social intelligence, human skills, and the bringing of information to bear for problem-solving. It’s critical thinking, and doing it while working with people. And yes, the information is important (it really, really is!)—but the information is not where very expensive medical schools really shine. With the advent of better qBanks, Anki decks, and commercial lecture products, we are increasingly choosing a factory schooling model over one that prioritizes a social experience of working with caring teachers, motivated peers that are important to you, and patients with meaningful tasks to provide motivation and centering.

The schools and students spend lots of money on pieces of paper and question banks and other forms of curated video content, and these are all wonderful things. My point is not to suggest that using better lectures and better questions is a bad thing for education—it’s certainly not. It’s that those things do not preclude the need for the other part—not in some token problem-based learning format or anything so prescriptive—but real, meaningful, non-tedious, in-person work.

Whatever it once was, it is no longer mainly an information problem. As the cliche goes, you can lead a horse to water, but you can’t make it drink. We need to encourage people to be motivated (or find motivated people), and we need to help curate and sustain that curiosity—so that people can do, and internalize the need to keep doing, the type-2 fun of hard work. The fact that students would rather be at home studying instead of working in the hospital shows the fruits of our system: the triviality of many students’ clinical experiences (the clerkship as performance art) combined with the pressure of Shelf exams as the defining feature of their grades.

The world is full of heavy objects, and yet most people are not ripped. We are the limiting factor, and we need systems to help us and support us to be our best selves. Burnout is a system that snuffs out our soul’s flickering flame.

There was a time in my radiology training where people were encouraged to incorporate multiple-choice questions with audience response into their lectures instead of taking hot-seat cases where people were put on the spot. That is more comfortable, to be sure, but I can tell you the fear and anxiety of wanting to perform for your colleagues and mentors was much more inspiring. Not every time you’re wrong in public is unfair pimping/humiliation.

An ability to take those hits on the chin when you flub a case is also important. Psychological safety doesn’t mean never being challenged—it means being supported enough that you can bounce back from your inevitable failures. We have forgotten how important resiliency is, and we’ve allowed undergraduate medical education to remain dominated by the factory information paradigm while neutering the chance for more students to become respected members of the care team more of the time.

Staying Small

03.16.26 // Medicine

I would posit that good healthcare is more analogous to a restaurant than most large corporations. From Michael E. Porter’s Competitive Strategy: Techniques for Analyzing Industries and Competitors:

If close local control and supervision of operations is essential to success the small firm may have an edge. In some industries, particularly services like nightclubs and eating places, an intense amount of close, personal supervision seems to be required. Absentee management works less effectively in such businesses, as a general rule, than an owner-manager who maintains close control over a relatively small operation. Smaller firms are often more efficient where personal service is the key to the business. The quality of personal service and the customer’s perception that individualized, responsive service is being provided often seem to decline with the size of the firm once a threshold is reached. This factor seems to lead to fragmentation in such industries as beauty care and consulting.

Healthcare has seen those fragmentation factors dissolve since the 1990s and especially since the ACA.

In Redefining Health Care: Creating Value-based Competition on Results, Porter then argues:

Competition has taken place at the wrong levels, and on the wrong things. It has gravitated to a zero-sum competition, in which the gains of one system participant come at the expense of others. Participants compete to shift costs to one another, accumulate bargaining power, and limit services. This kind of competition does not create value for patients, but erodes quality, fosters inefficiency, creates excess capacity, and drives up administrative costs, among other nefarious effects.

Over the years since I returned to the area, the local university medical center has progressively moved away from individualized service into a predictably depressing corporate market-share grab. If this is “The Future of Medicine Today,” the future is bleak.

If policymakers want to improve US healthcare, the easiest lever to pull first is to enable physician ownership and make it feasible to stay small without needing to opt out of the system entirely by going direct pay.

A Problem With Doctors’ Software

02.26.26 // Medicine, Miscellany

Daniel Cook on Mixing Games and Applications:

Why do games have such a radically different learning curve than advanced applications? It turns out that games are carefully tuned machines that hack into human beings’ most fundamental learning processes. Games are exercises in applied psychology at a level far more nuanced than your typical application. Implicit in this description of interactivity is the fact that users change. More importantly, the feedback loops we, as designers, build into our games, directly change the user’s mind… The person that starts using a game is not the same person that finishes the game. Games and the scaffold of skills atoms describes in minute detail how and what change occurs.

This is a pretty big philosophical shift from how application design is usually approached. We tend to imagine that users are static creatures who live an independent and unchanging existence outside of our applications. We merely need to give them a static set of pragmatic tools and all will be good. Games state that our job is to teach, educate and change our users. We lead them on an explicitly designed journey that leaves them with functioning skills that they could not have imagined before they started using our application. Our games start off simple and slowly add complexity. Our apps must adapt along the user’s journey to reflect their changing mental models and advanced skills. Failure to do so results in a mismatch that results in frustration, boredom and burnout.

Interesting applications to enterprise software, including EHRs, PACS, etc., where there is a distinct lack of gamification. This dovetails nicely with Jason Fried’s point about software design: the spectrum between making features obvious, easy, and possible.

Task Spreading

02.03.26 // Medicine, Radiology

In many practices and especially in academia, important but burdensome tasks are either treated like hot potatoes or flow downhill to the most junior faculty. There are several strategies for distributing important but non-measurable or non-promotable tasks:

  • random assignment
  • rotating schedules
  • clear benefits like compensatory time off
  • automatic cycling/off periods (e.g., do a task for one year, then off for four years)
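
The cycling idea in that last bullet reduces to a simple round-robin rotation. Here’s a minimal sketch (a hypothetical helper, not from any actual scheduling system): with a five-person roster and one-year terms, everyone gets four years off between turns.

```python
from itertools import cycle

def rotating_schedule(members, n_years, term_years=1):
    """Assign a non-promotable task round-robin.

    Each member serves `term_years` in turn; with k members, everyone
    gets (k - 1) * term_years years off between terms. Hypothetical sketch.
    """
    roster = cycle(members)
    person = next(roster)
    schedule = []
    for year in range(n_years):
        schedule.append(person)
        if (year + 1) % term_years == 0:  # term over: rotate to the next member
            person = next(roster)
    return schedule

# Five-person roster, one-year terms: "one year on, four years off"
print(rotating_schedule(["A", "B", "C", "D", "E"], 7))
# → ['A', 'B', 'C', 'D', 'E', 'A', 'B']
```

The point of automating it is precisely that nobody has to volunteer and nobody can quietly accumulate the job forever.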

Avoid asking for volunteers in some situations, particularly for tasks that have little upward mobility, little chance to shine, and where performance is perhaps less important or easy to measure. Otherwise, you get the same people-pleasers doing everything and burning out. Even if they’re not burning out, their time is taken up (perhaps by doing the wrong activities as acts of service).

Alternatively, if a task is important enough to get done, consider ways in which it can be part of someone’s job description. This may be more practical when something undesirable is paired with something desirable, like remote schedules with a more challenging case mix.

Another possibility is putting your money where your mouth is. My practice has a small component of bonus compensation tied to accumulating enough brownie points for doing various tasks like tumor boards, recruiting activities, etc. The reality in our group is that we are large enough and the tasks varied enough that the minimum threshold to qualify is laughably small, and there aren’t measurable benefits to accumulating more than you need. It ensures a floor, probably relying more on the principle of loss aversion than the actual money at stake.

You can give people effort RVUs for doing those tasks, but in this market, anything that gives you credit towards a full day’s work that detracts from reading cases can also contribute to overall staffing problems. In other clinical fields, it just pushes more work into the evening/weekends (and eats up precious academic/admin time if in the mix). In small groups where such activities may be distributed evenly, nothing may need to happen. In larger practices where most members will do nothing except interpret scans, stipends are an option, particularly for larger tasks like division directorship.

Bean counting is fine to an extent, and some activities like multidisciplinary conferences are easy to count. But many important tasks are challenging to quantify, and attempts to do so may even backfire. The classic daycare fine experiment added a financial penalty when parents arrived late to pick up their children, which replaced a powerful social norm with a market norm (swapping guilt for a few dollars). Sometimes, something people would do for free is devalued when a laughably small sticker price is put on it.

Consider preventing excess accumulation/consolidation: if someone is going to do something more, the default should be for that person to do something less as well. It’s not just for their benefit. Our institutions are fragile when they rely on individuals too much for any reason.

(On a related note, documentation is key for onboarding and transferring. Truly useful internal documents that are up to date are exceedingly rare.)

Distinguish between areas where someone is specifically important—where the specific representation matters—versus when someone is a warm body. Not all extra work is the same, and not all additional roles are interchangeable or easy to replace. The ones that leave a real hole need to be valued more, even if that comes mostly in the form of consistent gratitude.

If you think someone special knows just how much they are respected and appreciated, they don’t. I promise.

Backseat Epic

01.30.26 // Medicine, Miscellany

Mike Swanson’s “Backseat Software” perfectly describes one of the many modern software problems that also happen to plague the Epic EHR:

So the problem isn’t that software ever teaches, asks, or informs. The problem is that once a company builds the machinery to do it, that machinery becomes cheap to reuse, and the incentives gradually pull it away from “help the user succeed” toward “move the metric.”

What starts as an occasional heads-up becomes a permanent layer of UI exhaust.

And then:

So why does it keep happening? Because inside companies, the incentives are clear and the measurements are easy. You can measure clicks and track whether they led to a “completion.” You can measure whether a nudge led to the next step in the funnel.

You cannot easily measure the resentment. Or the rage clicks when they smash a button to dismiss another “did you know” pop-up. You cannot easily chart the moment a user thinks, “I used to like this product, and now it feels needy.” You cannot easily quantify the slow erosion of trust.

I have been dismissing the same feature tour prompt at every login for 5 years.

The Mediocrity of Scale

01.29.26 // Medicine

Different doesn’t mean right or wrong. Best practices are sometimes best, and sometimes they’re just a mediocre consensus that a small group agrees to.

Institutions are so often large and stupid because herds of humans are difficult to manage at scale, and the lessons learned by individuals often don’t become institutionalized without significant baggage in the form of procedural debt. So many individual actions and the proliferation of useless bureaucracy can be ascribed to the pursuit of plausible deniability. The fewer things that can be pinned on us, the safer our jobs. No one is fired for being rational, even if the outcome is suboptimal.

It’s not the goals but rather the incentives created by those goals that shape outcomes. It’s the BS that sounds good and reasonable in isolation versus what works in practice. The most important tactic for achieving any strategy is removing the friction, roadblocks, and unintended negative consequences of doing the actual desired behavior.

There is no broadly acceptable present value function to gauge political acts the way we can estimate the effects of economic decisions in terms of monetary value. The time horizons of politics, social media, and fame are all much shorter than the downstream consequences of most decisions, which incentivizes leaders of all stripes to do stupid shit that looks good now—even if it is, on the whole, harmful.

This is a problem because we often conflate “rational” behavior with optimization of easily measurable outcomes.

The creation of red tape is understandable, but the incremental costs are almost never considered and are challenging to undo, as every policy is obviously there for a reason—even if, on balance, that reason does not justify the cost. Nothing is easier than thinking of more things we can do to make something better or safer. What is harder is asking the tough question: at what cost? What are we willing to forgo in order to achieve this addition? Intentions matter, but the consequences matter more.

Bigger often means more inbound data and more layers of management to deal with it. Size isn’t always good, and growth shouldn’t always be the goal. Yes, size can help you fight a bully, and it can also help you be a bully. And there are efficiencies of scale when it comes to both high fixed costs as well as volume-based discounts on costs like IT. But we would be foolish to only look at those measurable costs and lose sight of the downsides: increasing bureaucracy, weak culture, lack of accountability, and a diluted mission.

Step into a large hospital and tell me if you think everybody feels like they’re on the same team. The fragmented care in your average mega-hospital is so often far removed from a well-oiled machine. These organizations are essentially too large to have a mission or a really strong, behavior-shaping positive culture. Decision makers and the rank-and-file are often separated by helplessness-inducing layers of human flotsam.

The mission of a large healthcare organization is not actually patient care. Healthcare is the industry, and patients are the vehicle for the business model. They are the sine qua non of healthcare profit. But the proliferation of bureaucrats and policies because of the need to “manage people” tells us that it’s not working.

If people were really working together, we wouldn’t need so much bloat.

The Unhelpfulness of AI Predictions

01.26.26 // Medicine, Radiology

Most Content is Boring

Most current articles about AI and radiology work don’t help us much with regard to the questions people are constantly asking about predicting the future. Publish or perish is fine, but I’ll admit I also don’t really care how many ways we can say these early tools are sorta maybe helpful sometimes for some people but are also super brittle and unreliable and also the most important thing is that they don’t make things worse, particularly due to bad integration with legacy workflows. How to incorporate something only borderline useful into a workflow you probably don’t control just isn’t very interesting, and people keep dropping predictions about what things will look like years (or even decades!) in the future based on what we’ve seen so far.

Extrapolating linear growth when we have exponential compute isn’t helpful. But imagining a post-work techno-utopia vs dystopia is also not actionable for anyone. We can skip the hand-wringing and just agree that everything currently available isn’t good enough to change the world but that maybe one day it will be.

We live in a complicated world where many of what we perceive as challenging problems are computationally straightforward (e.g. cumbersome long calculations, playing chess) and other things that can become automatic to most humans are computationally intense (as it turns out, driving a car, though it’s getting better!). Corner cases and uncertainty are, in some sense, the whole game when it comes to automation. Medicine—with its treatment algorithms designed for squishy humans with limited narrative skills and inherently incomplete information—is a combination of both.

Lots of medicine is straightforward and subject to guild protectionism, but some of it is genuinely hard. And sometimes the difference between those two situations is only knowable after the fact. If nothing else, ambiguity is hard, and how much uncertainty and risk to accept has never been codified. For example, many current AI tools are designed for a high negative predictive value and are therefore overly sensitive. That’s fine for a tool to help a human not make a mistake, but that still can lead to cry-wolf problems and certainly wouldn’t work for an autonomous machine.
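
The cry-wolf problem falls out of simple confusion-matrix arithmetic. Here’s a quick sketch with hypothetical numbers (not from any actual product): at screening-level prevalence, a tool tuned for near-perfect sensitivity can post a stellar negative predictive value while still generating a flood of false alarms.

```python
def screening_stats(prevalence, sensitivity, specificity, n=100_000):
    """Confusion-matrix arithmetic for a screening test (hypothetical numbers)."""
    pos = prevalence * n     # truly diseased
    neg = n - pos            # truly healthy
    tp = sensitivity * pos   # caught
    fn = pos - tp            # missed
    tn = specificity * neg   # correctly cleared
    fp = neg - tn            # false alarms
    ppv = tp / (tp + fp)     # how often a flag is real
    npv = tn / (tn + fn)     # how often a pass is safe
    return ppv, npv

# 0.5% prevalence, 99% sensitive, 90% specific:
ppv, npv = screening_stats(0.005, 0.99, 0.90)
print(f"PPV {ppv:.1%}, NPV {npv:.3%}")  # roughly 4.7% PPV vs 99.994% NPV
```

Roughly twenty false alarms for every true positive: fine as a safety net behind a human reader, but exactly the profile that wouldn’t work for an autonomous machine.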

Balancing uncertainty and service expectations is no joke for humans or machines or the combination.

Even Good Predictions about Important Things Are Challenging to Act On

As I’ve said before, I don’t think the reality of impending change means that we can meaningfully predict the contours of that change or should even try to plan for its eventuality in ways that would be destabilizing in the intervening time. What Hinton got wrong is that predictions about replacement are dangerous because, until replacement actually occurs and/or demand shifts, we need to maintain the status quo. There’s no way to know if any field (whether primary care or even surgery) will be augmented or replaced over an effectively infinite timeline, or if the contours of the work will change—and if so, to what extent and how fast—and therefore no way to know the correct number of people we need long-term such that we could preemptively adjust the training pipeline to land at the optimal number of people for every kind of job. It’s not going to happen.

For long training pathways like most physician careers, trying to pretend we can is insanity. Central planning in that fashion almost never works because there are far too many unknowns. What we need to do is continue training people for the world we live in with a plan, desire, and ability to adjust our workforce over time as the world changes. We need less planning and more flexibility. Less preciousness about how we work and more orientation on the desired outcomes.

Until the exact day a computer replaces a human, you need that human. Trying to predict a societal need years in the future is a fool’s errand. We haven’t been able to fill the halls of medicine “correctly” over the past century; why would we be better at doing it now?

The Breast Scenario

Let’s just posit a very narrow scenario to see how futile the situation is. Let’s magically say AI for screening mammography becomes incorporated into major vendors at some point in the next X years, allowing for automated reports, even for callbacks. A computer report and a human report would be indistinguishable, especially since BI-RADS facilitates complete automation of reporting language and organization. (I’m also picking breast imaging for this thought experiment because it’s currently very much in demand, tends to be narrowly subspecialized, and has relatively unique current job offerings.)

Let’s also say that the algorithm performance and regulations are such that, for now, those exams are still read by a human being as well—at least in the short term. Guess what the incremental efficiency gain will be? How many screeners can someone read in that setting? The answer, of course, is that it varies.

Automation bias is going to hit fast, so it’s not clear if someone will be able to read 10-20% more, twice as many, three times as many, or some other multiple. Whatever the number is, the first question becomes: how does that affect breast imaging staffing? Does it change the premium that breast imagers can currently command because of the high volume of high-paying screening exams, particularly those with 3D tomosynthesis? How many nefarious rubber-stampers do we need to change the competitive landscape? Think of how fast turnaround times can be! This of course will happen on the backdrop of continually increasing imaging volumes and technological development.

Okay, then what happens to mammography reimbursement? The RVUs will inevitably change if the effort to do the work changes. How fast will that happen, and how drastic will the change be? The RVU system is a zero-sum game, and undoubtedly, there will be a lot of fighting about any changes to radiology reimbursement that occur thanks to AI tools—but they will occur. Tomosynthesis is well-paid and already in the crosshairs.

How would the subspecialty handle a shifting balance of non-procedural and procedural work, or remote vs on-site work?

Then, how would the job market respond? Would we see similar jobs with less pay? Or would we see a chilling of the growing lifestyle niche of 4-day workweeks with no evenings or weekends? Would we just be able to better staff growing volumes? When I finished training less than a decade ago, it was common for breast imagers to take general call. It would seem that’s pretty rare now. Medicine, like politics, is local. Of course, breast imaging isn’t operating in a vacuum; this is all taking place within an even larger lifestyle teleradiology trend. But variables don’t play nice and isolate themselves: everything is always changing all the time.

What is the timescale for those changes, and how piecemeal would they be? What are the second and third-order downstream consequences of even a single shift in radiology productivity? What about fully automated DEXA? What about prepopulated ultrasound measurements? Natural language draft reports for other modalities. We don’t even know. But the answer isn’t simply that we need fewer breast imagers when there are currently not enough, just because one day we might.

Medico-legally, how will we as a society handle mistakes made by machines? Or, with a human checker, how will we weigh a change made by a human versus a mistake made by a human coupled with a machine?

How will we feel about a mistake where the AI misses something and the human doesn’t catch it–or doesn’t fix it–versus a situation where the AI calls something and the human pulls it back, even when the AI is right? How will we handle type 1 versus type 2 errors? Will companies self-insure for autonomous work as the dominant strategy, or will we see protectionist legislation about a variety of key human tasks?

We don’t know, long-term, whether there are edge cases where humans are more or less important. But we also don’t know, even short-term, where the contours of those skills will reside. Maybe we’re entering a world where the ED would be a better place if a computer did most of the interviewing and initial workup for stable patients.

We are amidst a jagged frontier where AI is good at some things and bad at others, and we know there is a lag between capacity, implementation, legislation, and regulation. Planning is always a challenge; useful prediction is basically impossible.

So, instead of predicting exactly what will happen, we need to acknowledge the broad potential possibilities that could occur and plan a system to deal with them when/if they do occur, as opposed to trying to nail the blind landing.

We All Just Do Stuff Anyway

I don’t want to call out any behavior as irrational, but we should acknowledge upfront the human tendency to just do whatever we want to do and then create post hoc rationales + explanations + narratives to justify and explain our behavior after the fact. Much of this is unconscious and may be a clever trick of our mushy brain psychology, but that doesn’t make it any less real. Try to be honest with yourself and see why you feel the way you do about just about anything and if the reasons you give yourself actually mesh with reality. Write it down, or you’ll cheat (it’s our nature).

It’s like a medical student saying they like “technology” and “problem solving” and “people coming to the reading room for answers” for why they picked radiology when it was really that it seemed relaxing and quiet with good vibes and plenty of coffee and that people at least respected the radiologists’ intelligence and oh yeah they heard they were paid well.

Over the past decade or so, psychiatry has gone from very uncompetitive to pretty competitive. Is it because mental health is important and less stigmatized? Surely that’s part of it. But maybe also because cash-based private practice in a high-demand field (potentially even done via telehealth) sounds like a nice quality of life and a rare way to practice medicine on your own terms. We always seek single explanations thanks to our bias for narrative, but that doesn’t mean that the compelling storytelling is any more reflective of reality.

The examples of failed central planning and the reliance on narrative are legion.

To reiterate my point: No single person or entity has the answer.

So—

Yes, we need all varieties of doctor now, and we need to keep training all of them until we don’t—if that ever occurs.

Will a job like radiology or even primary care pay the same 10 years from now as it does today, or 20 years from now? That will depend on how much value the human provides within the system, how much work they do, how hard or complex that work is, and the supply/demand balance of the available workforce. Volatility and all kinds of non-AI factors already lead to lots of unpredictability.

There may be golden years of excess income for scalable jobs like radiology as these changes first take place. If senior engineers can use AI to take on more projects without hiring as many junior coders, the result is rising incomes for those on top and a rug-pull for those entering the workforce.

Both medicine and society need ongoing, never-ending conversations in order to work on creating adaptable systems. Instead of placing bets, the acronym organizations need to set stakeholders up for success across the full spectrum from an AI-nothingburger to AI-work-is-optional.

Number Games

01.22.26 // Medicine, Miscellany

More from Alchemy: The Dark Art and Curious Science of Creating Magic in Brands, Business, and Life by Rory Sutherland (Part 1 is here):

When every function of a business is looked at from the same narrow economic standpoint, the same game is applied endlessly. Define something narrowly, automate or streamline it – or remove it entirely – then regard the savings as profit. Is this, too, explained by argumentative thinking, where we would rather win an argument than be right?

Especially egregious in large institutions engaging in budgetary arbitrage. A hospital where I work “saved” money a few years ago by outsourcing a lot of its IT abroad. I guarantee they didn’t actually save money for the organization despite spending less on the IT line item.

Today, the principal activity of any publicly held company is rarely the creation of products to satisfy a market need. Management attention is instead largely directed towards the invention of plausible-sounding efficiency narratives to satisfy financial analysts, many of whom know nothing about the businesses they claim to analyse, beyond what they can read on a spreadsheet. There is no need to prove that your cost-saving works empirically, as long as it is consistent with standard economic theory. It is a simple principle of business that, however badly your decision turns out, you will never be fired for following economics, even though its predictive value lies somewhere between water divining and palmistry.

[…]

The problems occur when people try to solve ‘wide’ problems using ‘narrow’ thinking. Keynes once said, ‘It is better to be vaguely right than precisely wrong’, and evolution seems to be on his side. The risk with the growing use of cheap computational power is that it encourages us to take a simple, mathematically expressible part of a complicated question, solve it to a high degree of mathematical precision, and assume we have solved the whole problem.

There’s a line from Tony Fadell’s Build that I love: “Data can’t solve an opinion-based problem.” So many meetings and dashboards, and so much of that data is used as a proxy to somehow “answer” the real question: Is this all really working? Can we do this better? Not just the one part that’s easy to measure—but the whole thing?

In the event, the platform has improved since we adopted it, but the fact that a cost-saving decision could be made without any consideration of the hidden risks to efficiency was nonetheless alarming. Why are large commercial organisations adopting this ideological approach to business? That was supposed to be the weakness of communism.

At least in healthcare and other large organizations, the culprit is departmental siloing and the abstraction of both strategy and tactical decision-making to layers of managers with little or out-of-date domain expertise. I have seen even good initial decisions made with involved stakeholders ruined during implementation by the suits.

People’s motivations are not always well-aligned with the interests of a business: the best decision to make is to pursue rational self-justification, not profit. No one was ever fired for pretending economics was true.

Moral hazard is potent, especially when combined with the Peter Principle.

We fetishise precise numerical answers because they make us look scientific – and we crave the illusion of certainty. But the real genius of humanity lies in being vaguely right – the reason that we do not follow the assumptions of economists about what is rational behaviour is not necessarily because we are stupid. It may be because part of our brain has evolved to ignore the map, or to replace the initial question with another one – not so much to find a right answer as to avoid a disastrously wrong one.

It’s comfortable but wrong to wave away unavoidable uncertainty with a blanket of data.

[Herbert] Simon used satisficing to explain the behaviour of decision makers under circumstances in which an optimal solution cannot be determined. He maintained that many natural problems are characterised by computational intractability or a lack of information, both of which preclude the use of mathematical optimisation procedures.

[…]

Consequently, as he observed in his speech on winning the 1978 Nobel Prize, “decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world.”

I’ve included Simon’s work on “bounded rationality” in a bunch of my talks but, surprisingly, not much on this site. His work set the stage for much of behavioral economics and modern decision science, and it has been of incredible personal and professional value to me.
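Simon’s distinction is concrete enough to sketch in a few lines of code: an optimizer has to score every option before it can answer, while a satisficer stops at the first option that clears an aspiration level. The candidate list, scoring function, and threshold below are invented purely for illustration.

```python
def optimize(options, score):
    """Examine every option and return the best one (exact, but costly)."""
    return max(options, key=score)

def satisfice(options, score, aspiration):
    """Return the first option that is 'good enough'; None if none clears the bar."""
    for option in options:
        if score(option) >= aspiration:
            return option
    return None

if __name__ == "__main__":
    # Hypothetical candidates, each with a quality score in [0, 1].
    candidates = [("A", 0.4), ("B", 0.7), ("C", 0.9)]
    quality = lambda c: c[1]

    print(optimize(candidates, quality))        # scans all three → ('C', 0.9)
    print(satisfice(candidates, quality, 0.6))  # stops early     → ('B', 0.7)
```

The practical difference shows up when scoring is expensive or the option space is intractably large: the optimizer’s cost grows with the number of options, while the satisficer’s cost depends only on how soon something acceptable turns up.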
