Modern Illustration is a delightful online archive of vintage (c.1950-1975) illustration curated by Zara Picken.
Via The Wealth Ladder by Nick Maggiulli:
As Robert Sapolsky wrote in Why Zebras Don’t Get Ulcers: When we sit around and worry about stressful things, we turn on the same physiological responses—but they are potentially a disaster when provoked chronically. A large body of evidence suggests that stress-related disease emerges, predominantly, out of the fact that we so often activate a physiological system that has evolved for responding to acute physical emergencies, but we turn it on for months on end, worrying about mortgages, relationships, and promotions.

If you find yourself chronically worried in life, finding ways to destress will be paramount for keeping your mental wealth intact. Studies show that exercise, sleep, meditation, yoga, as well as various mindfulness techniques can help to reduce stress. Whatever you decide to do, finding what works for you is what’s important.

Lastly, focusing on yourself and your self-esteem is another key component of mental wealth. And for many, self-esteem is built on how they perceive their social status. If you believe that you haven’t accomplished anything, then you will probably feel low status. But if you believe that what you do has purpose, then you will feel high status. More importantly, your perceived status does not need to be based on money or career accomplishments. You can find status and self-esteem in many areas of life.

Once again from Robert Sapolsky: So, the lowly subordinate in the mailroom of the big corporation may, after hours, be deriving tremendous prestige and self-esteem from being the deacon of his church, or the captain of her weekend softball team, or may be the top of the class at adult-extension school. One person’s highly empowering dominance hierarchy may be a mere 9-to-5 irrelevancy to the person in the next cubicle.
You get to, as suggested by Calvin & Hobbes creator Bill Watterson, “recreate constructively” and construct life’s meaning. Unlike most games, the status game allows you to choose how to keep score.
Status is relative to the context in which it is being evaluated.
In some contexts, aspirations to status can even backfire. The trappings of wealth, in the wrong setting, can function more as a cloak of shame. Being deliberate about your context (career, schools, neighborhood, etc.) has an incredible impact on perceived status. As in, which Joneses are you trying to keep up with?
Thankfully, you get to choose which status game you want to play and how you want to evaluate yourself. This is both a blessing and a curse, because you can be objectively great at something and feel like a failure or you can be just okay at something and feel like a massive success. It’s all based on how you feel. It’s based on the story you tell yourself about yourself.
I am reminded of David Brooks’ distinction between resume virtues and eulogy virtues.
On living in a tumultuous, uncertain world, from the Free Press’ Abigail Shrier:
Overinvest, never underinvest, in people around you and in those you love. Particularly in a world with poor visibility, they are the closest thing any of us has to security. Give the community you inhabit a real shot and make a go of ensuring it succeeds.
If crime skyrockets, if you must leave the city, if our great civilization goes bust, you won’t regret the time you spent taking your kids and their friends out for ice cream. You’ll all carry with you memories that will bolster you wherever you go. Do all these things right up until the very moment you decide to make a change.
Here is the tough love you requested: You won’t regret having wholeheartedly invested in your home and community for as long as they are yours. I can almost guarantee, at the end of your journey, you won’t lament, “Civilization collapsed. I needed to flee. And meanwhile, I’d just bought this damn sofa.”
Paul Ford, writing in Wired about post-bubble normalized (i.e. boring) AI:
People often wonder how to get back to the vibes of the early, heady days of the internet. It’s easy: Crash the global economy and leave lots of young people with keyboards and spare time. Make it boring. That’s what’s interesting.
America after World War II entered an era of economic growth and intermittent societal cohesion. That era also created the American consumer and an economic system based in large part on earning money to buy things, at first to change your life (a washing machine!) and then to signal to others (another new car?).
That simple instinct—to not fall behind—combined with easy credit and rising expectations, helped fuel the debt culture that still defines American life. Market capitalism naturally creates winners and losers. When growth slows or inequality widens, that sense of relative deprivation becomes a psychological trap. As in, success isn’t objective; it’s relative.
The Great Disgruntlement of the modern era has less to do with the perception of falling wages, purchasing power, and actual standards of living, and almost everything to do with social comparison: the big gap between the top & bottom of the economic leaderboard and the ability to see others doing “better” than you via social media. It’s aspirational poison that people can feed on with debt, which is what you resort to when you “need” things you can’t afford. Yes, I’m oversimplifying. There is obviously a great deal of real deprivation and struggle in this country and the world: healthcare bankruptcies, crazy house prices, etc. Right now, I’m talking about the luxury purse economy.
It’s safe to say at this point that some tasks once reserved for humans will be done by some variety of machine. It’s happened before, has been happening continuously, and will happen more so in the future. It’s possible that human capital will simply shift to more human, social, squishy tasks. We certainly need more of it than we have now, and the economy could shift in that direction. More teachers, fewer bankers.
But, if we take the bullish AI view, there’s also a possibility that the haves vs have-nots will transition from the current comparison via purchasing power to comparison via the ability to construct a (visibly) meaningful life. The transition from a work-based identity to one where large portions of society may not need to work—at least not in the traditional sense—would be more psychologically jarring than the postwar consumer revolution.
We’ve learned how economic inequality leads to material arms races. The next inequality may be existential: between those who can find meaning without work and those who cannot.
If the postwar era was defined by the democratization of consumption, the AI era may be defined by the democratization of purposelessness—and the challenge of rediscovering meaning in a world where being economically useful is no longer the foundation of identity.
David Foster Wallace, talking about TV and the dawn of the internet during his 1996(!) book tour for Infinite Jest:
At a certain point we’re gonna have to build up some machinery, inside our guts, to help us deal with this. Because the technology is just gonna get better and better and better and better. And it’s gonna get easier and easier, and more and more convenient, and more and more pleasurable, to be alone with images on a screen, given to us by people who do not love us but want our money.
My gosh: how good and prescient is that?
Wallace admitted he hadn’t made the machinery to defend himself against the pull of passivity:
We’re either gonna have to put away childish things and discipline ourselves about how much time do I spend being passively entertained? And how much time do I spend doing stuff that actually isn’t all that much fun minute by minute, but that builds certain muscles in me as a grown-up and a human being? And if we don’t do that, then (a) as individuals, we’re gonna die, and (b) the culture’s gonna grind to a halt.
…Very soon there’s gonna be an economic niche opening up for gatekeepers…And we will beg for those things to be there. Because otherwise we’re gonna spend 95 percent of our time body-surfing through shit.
Dating even further back to his 1992 essay “E Unibus Pluram,” Wallace described America as increasingly an “atomized mass of self-conscious watchers and appearers” organized into “networks of strangers connected by self-interest and contest and image.” Wallace captured the hyperpolarized tribal era of social media decades before it occurred, like a sociopolitical prophet, much the way Neal Stephenson captured technology (e.g. Snow Crash). Wallace was writing about the growing epidemic of loneliness and depression years before it became a popular cause (as exemplified by Surgeon General Vivek Murthy’s Together, published in 2020); he committed suicide in 2008 at the age of 46.
We could make it easier to make good choices, sure. We could let our better natures define the guardrails for personal behavior and technology use to circumvent the algorithms. Maybe we’ll even have personal agents to fight against corporate agents in curating a meaningful life in the age of AI. The desire for gatekeepers is real.
But it seems to me that Wallace is probably still right: the hoped-for machinery still ultimately has to be in the cultivation of self, not necessarily in the meditation/mindfulness mode or some other self-help flavor du jour, but rather in how Wallace describes freedom in his famous essay/commencement speech, “This is Water”:
The really important kind of freedom involves attention, and awareness, and discipline, and effort, and being able truly to care about other people and to sacrifice for them, over and over, in myriad petty little unsexy ways, every day.
I recently experienced a trial of Tesla’s full self-driving capability for the first time. It was a decent, if bewildering and somewhat spooky, experience. It’s unusual to see a computer do a task in the real world that you have been doing—and had to do yourself—for decades.
For some reason, feeling a computer exercise its judgment and control a vehicle in three-dimensional space is a bit different than seeing its capabilities in a chat box. In some ways, it is less impressive, and in others, much more.
Certainly, the current iteration—which requires you to maintain attention on the road—does not fulfill the promise of true self-driving in the sense that you are still, at least in part, the driver of the car and ultimately responsible for it, as opposed to being a front-seat passenger able to do other tasks. And that’s for good reason.
The reality is that, as with human drivers, the issues are always in the edge cases. The average person does not get in a crash after every left turn, and neither do self-driving cars. I was struck by how easy it is to trust, and how willing I was to let the car give it a go. The Tesla did struggle repeatedly in several locations during my trial. It may drive more miles without an accident than the average human driver, but the mistakes it currently makes are also ones that most people would not make:
It gets caught in turn-only lanes or the wrong turning lanes pretty consistently and doesn’t learn from those mistakes. It doesn’t, for example, move over nearly fast enough for a right-hand turn after exiting the highway if there is a long line of cars waiting. More dramatically, in downtown Dallas’s web of one-way streets, it tried to take the wrong fork at a bifurcation and head into oncoming traffic. It also ran a red light after trailing an 18-wheeler so closely that it couldn’t see the traffic light.
That being said, what struck me most is how quickly humans trust—and how rapidly automation bias takes effect simply by virtue of the product being available. There is a serious sense of trust in its capabilities. You see it handle a couple of turns, stop appropriately, go appropriately, and check the blind spot before changing lanes, and suddenly, you believe it can do the real deal. You’re ready to trust it on the road in ways that should frankly be pretty surprising. You begin to resent its incessant demand that you pay attention.
In some post-AGI arguments, people say that humans like other humans and really want other humans in the loop—but I’m actually not sure that’s the case. Especially not if the lack of humans in the loop means faster service, cheaper service, or more customizable service. Really, every situation could be different, and how you feel about your doctor or your pastor might be different than how you feel about driving yourself or about cabbies.
But these sorts of shortcomings are probably a matter of time, and the downstream consequences of even just this narrow use are profound and unknowable. I don’t want to opine on American car culture or how cities might deal with fleets of self-driving cars waiting for passengers.
It does make you wonder how high the bar for AI adoption is for any particular task. If we can trust a car so easily, a matter of personal life or death, how short is the shelf life of human expertise in other contexts?
In healthcare, for example, I have seen nothing autonomous that is sufficiently reliable yet, which is why adoption outside of personal use or low-hanging general LLM fruit—scribing, summaries/impressions, some clinical decision support/brainstorming, etc.—is relatively low other than narrow models flagging stuff like pulmonary emboli or evaluating a single finding like diabetic retinopathy or bone age. If, with broader tool adoption, we still need a human with hands on the wheel to prevent catastrophe in rare occurrences, then we will need to design the system carefully to keep that human actually paying attention, not just going through the motions to placate a monitoring algorithm instead of focusing on the road.
The question, as always, becomes: is the purpose to be more efficient, or is the purpose to provide a better product? Those are, as always, in constant tension.
If you search for reviews or commentary on Tesla self-driving, the most common thing you’ll find is frustration that the car won’t let you stop paying attention. Autopilot is frustrating. Driving is fine. Being a passenger is fine. Being in between (other than in stop-and-go traffic) is kinda annoying.
This is likely a temporary problem, and while driving is also different from many people’s jobs, I think it does speak to an underlying human desire: people do not like feeling superfluous. I do not think as a species we will like being forced to rubber-stamp or clock in just for the pretense of being a human in the loop (i.e. drawing a paycheck as liability operator/sin-eater).
Real work is meaningful. Box-checking is soul-sucking, demoralizing, and breeds resentment.
Karl Ove Knausgaard, writing in Harper’s earlier this year:
It feels as if the whole world has been transformed into images of the world and has thus been drawn into the human realm, which now encompasses everything. There is no place, no thing, no person or phenomenon that I cannot obtain as image or information. One might think this adds substance to the world, since one knows more about it, not less, but the opposite is true: it empties the world, it becomes thinner. That’s because knowledge of the world and experience of the world are two fundamentally different things. While knowledge has no particular time or place and can be transmitted, experience is tied to a specific time and place and can never be repeated. For the same reason, it also can’t be predicted. Exactly those two dimensions – the unrepeatable and the unpredictable – are what technology abolishes. The feeling is one of loss of the world.
Knausgaard is best known for his voluminous autobiographical writing (his memoirs are in six[!] volumes). It’s perhaps not surprising, then, that this piece is less about the technology itself and more about his experience of it. I don’t know about the prediction limitation (Asimov might disagree), but I still think he’s on to something.
One corollary for education is that while simulation is powerful and even necessary, the emotional valence of true experience is impossible to replicate entirely.
All the images I’ve seen of places I’ve never been, people I’ve never met create a kind of pseudomemory from a pseudoworld that I don’t participate in.
We are physical creatures living in a physical world with other physical creatures. The question is—as was perhaps most enjoyably explored in the nostalgia romp that is Ready Player One—how much is lost in the virtual world, how strong is the draw to the physical world, and how much of it do we need to live full lives?
Popular essays about AI published in our current media like to cycle from utopianism to massive dystopian automation/disruption to the “plea for collaboration.” The latter, from “A Better Way to Think About AI” by David Autor and James Manyika in The Atlantic:
In any given application, AI is going to automate or it’s going to collaborate, depending on how we design it and how someone chooses to use it. And the distinction matters because bad automation tools—machines that attempt but fail to fully automate a task—also make bad collaboration tools. They don’t merely fall short of their promise to replace human expertise at higher performance or lower cost, they interfere with human expertise, and sometimes undermine it.
This is the no man’s land that explains why articles about AI in medicine don’t reflect reality and why everyone I talk to thinks that all radiologists spend their days awash in useful AI.
Human expertise has a limited shelf life. When machines provide automation, human attention wanders and capabilities decay. This poses no problem if the automation works flawlessly or if its failure (perhaps due to something as mundane as a power outage) doesn’t create a real-time emergency requiring human intervention. But if human experts are the last fail-safe against catastrophic failure of an automated system—as is currently true in aviation—then we need to vigilantly ensure that humans attain and maintain expertise.
The permanent cousin of automation bias will be de-skilling. Pilots who can’t actually take the yoke and land planes anymore are de-skilled. If there is a gap between useful AI and magical super-human AI, then mitigating de-skilling and preventing never-skilling are critical components of any future workflow:
Research on people’s use of AI makes the downsides of this automation mindset ever more apparent. For example, while experts use chatbots as collaboration tools—riffing on ideas, clarifying intuitions—novices often treat them mistakenly as automation tools, oracles that speak from a bottomless well of knowledge. That becomes a problem when an AI chatbot confidently provides information that is misleading, speculative, or simply false. Because current AIs don’t understand what they don’t understand, those lacking the expertise to identify flawed reasoning and outright errors may be led astray.
The seduction of cognitive automation helps explain a worrying pattern: AI tools can boost the productivity of experts but may also actively mislead novices in expertise-heavy fields such as legal services. Novices struggle to spot inaccuracies and lack efficient methods for validating AI outputs. And methodically fact-checking every AI suggestion can negate any time savings.
Beyond the risk of errors, there is some early evidence that overreliance on AI can impede the development of critical thinking, or inhibit learning. Studies suggest a negative correlation between frequent AI use and critical-thinking skills, likely due to increased “cognitive offloading”—letting the AI do the thinking. In high-stakes environments, this tendency toward overreliance is particularly dangerous: Users may accept incorrect AI suggestions, especially if delivered with apparent confidence.
The rise of highly capable assistive AI tools also risks disrupting traditional pathways for expertise development when it’s still clearly needed now, and will be in the foreseeable future. When AI systems can perform tasks previously assigned to research assistants, surgical residents, and pilots, the opportunities for apprenticeship and learning-by-doing disappear. This threatens the future talent pipeline, as most occupations rely on experiential learning—like those radiology residents discussed above.
Learners will not truly learn if they don’t take on tasks independently.
From “Structured Procrastination” by John Perry, philosopher and author of The Art of Procrastination (delightfully subtitled: A Guide to Effective Dawdling, Lollygagging and Postponing):
The key idea is that procrastinating does not mean doing absolutely nothing. Procrastinators seldom do absolutely nothing; they do marginally useful things, like gardening or sharpening pencils or making a diagram of how they will reorganize their files when they get around to it. Why does the procrastinator do these things? Because they are a way of not doing something more important. If all the procrastinator had left to do was to sharpen some pencils, no force on earth could get him to do it. However, the procrastinator can be motivated to do difficult, timely and important tasks, as long as these tasks are a way of not doing something more important.
Structured procrastination means shaping the structure of the tasks one has to do in a way that exploits this fact. The list of tasks one has in mind will be ordered by importance. Tasks that seem most urgent and important are on top. But there are also worthwhile tasks to perform lower down on the list. Doing these tasks becomes a way of not doing the things higher up on the list. With this sort of appropriate task structure, the procrastinator becomes a useful citizen. Indeed, the procrastinator can even acquire, as I have, a reputation for getting a lot done.
This is the story of my life. Everything I’ve written stems from this reality. I started this blog in 2009 to escape studying as a medical student. I even started writing and publishing Twitter fiction (of all things!) to further fit something creative into the cracks of my day.
For most of my life, the easiest way to get anything done was to have an overly full plate and just juggle interesting distractions with the sheer terror of missing deadlines.