C.S. Lewis (of Narnia fame) on peer learning:

It often happens that two schoolboys can solve difficulties in their work for one another better than the master can. The fellow-pupil can help more than the master because he knows less. The difficulty we want him to explain is one he has recently met. The expert met it so long ago he has forgotten.

I’ve always been a big proponent of peer teaching and peer mentoring in medicine. I also often wonder if I’m getting worse at teaching the basics as I get older.

// 01.15.24

From “How to Do Great Work” by Paul Graham:

Schools also give you a misleading impression of what work is like. In school they tell you what the problems are, and they’re almost always soluble using no more than you’ve been taught so far. In real life you have to figure out what the problems are, and you often don’t know if they’re soluble at all.

Schools sometimes also give students the misleading impression that learning is not fun for its own sake and that writing should be boring.

// 01.04.24

From “The Bitter Lesson” by Rich Sutton:

In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge—knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researchers’ time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.

[…]

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.
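
A toy sketch of what “statistical and computation-heavy” meant in practice: the forward algorithm scores an observation sequence under a hidden Markov model using nothing but transition and emission probabilities, with no hand-coded knowledge of phonemes or vocal tracts. The two-state model and every number below are invented for illustration; real recognizers used far larger models with parameters estimated from data, which is part of why they scaled with computation where hand-built rules did not.

```python
# Toy forward algorithm for a hidden Markov model (HMM).
# All states and probabilities here are made up for illustration.

def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(obs) under the HMM by summing over all hidden state paths."""
    # alpha[s] = probability of emitting obs[: t + 1] and ending in state s
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: emit_p[s][o]
               * sum(alpha[prev] * trans_p[prev][s] for prev in states)
            for s in states
        }
    return sum(alpha.values())

# A made-up two-state model: which kind of "phone" produced each symbol?
states = ("vowel", "consonant")
start_p = {"vowel": 0.6, "consonant": 0.4}
trans_p = {
    "vowel":     {"vowel": 0.3, "consonant": 0.7},
    "consonant": {"vowel": 0.8, "consonant": 0.2},
}
emit_p = {
    "vowel":     {"a": 0.7, "t": 0.1, "s": 0.2},
    "consonant": {"a": 0.1, "t": 0.5, "s": 0.4},
}

print(forward(("a", "t", "a"), states, start_p, trans_p, emit_p))
```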

// 01.03.24

From the short essay, “Energy Makes Time,” by Mandy Brown:

But there’s something else I want to suggest here, and it’s to stop thinking about time entirely. Or, at least, to stop thinking about time as something consistent. We all know that time can be stretchy or compressed—we’ve experienced hours that plodded along interminably and those that whisked by in a few breaths. We’ve had days in which we got so much done we surprised ourselves and days where we got into a staring contest with the to-do list and the to-do list didn’t blink. And we’ve also had days that left us puddled on the floor and days that left us pumped up, practically leaping out of our chairs. What differentiates these experiences isn’t the number of hours in the day but the energy we get from the work. Energy makes time.

The what is sometimes even more important than the how much.

// 09.04.23

Humans, with some incredible diligence and lots of practice, can do such fascinating things.

Pretty unreal.

// 08.23.23