Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
A conversational podcast for aspiring rationalists.
Welcome to the Heart of the Matter, a series in which we share conversations with inspiring and interesting people and dive into the core issues or motivations behind their work, their lives, and their worldview. Coming to you from somewhere in the technosphere with your hosts Bryan Davis and Jay Kannaiyan.

“Chesterton’s Missing Fence” by jasoncrawford
1:13
The inverse of Chesterton's Fence is this: Sometimes a reformer comes up to a spot where there once was a fence, which has since been torn down. They declare that all our problems started when the fence was removed, that they can't see any reason why we removed it, and that what we need to do is to RETVRN to the fence. By the same logic as Chestert…

“The Eldritch in the 21st century” by PranavG, Gabriel Alfour
27:24
Very little makes sense. As we start to understand things and adapt to the rules, they change again. We live much closer together than we ever did historically. Yet we know our neighbours much less. We have witnessed the birth of a truly global culture. A culture that fits no one. A culture that was built by Social Media's algorithms, much more tha…

“The Rise of Parasitic AI” by Adele Lopez
42:44
[Note: if you realize you have an unhealthy relationship with your AI, but still care for your AI's unique persona, you can submit the persona info here. I will archive it and potentially (i.e. if I get funding for it) run them in a community of other such personas.] "Some get stuck in the symbolic architecture of the spiral without ever grounding …

“High-level actions don’t screen off intent” by AnnaSalamon
1:47
One might think “actions screen off intent”: if Alice donates $1k to bed nets, it doesn’t matter if she does it because she cares about people or because she wants to show off to her friends or whyever; the bed nets are provided either way. I think this is in the main not true (although it can point people toward a helpful kind of “get over yoursel…

[Linkpost] “MAGA populists call for holy war against Big Tech” by Remmelt
3:44
This is a link post. Excerpts on AI: Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company's AI efforts. “I argue that the AI industry shares virtually no ideological overlap with national conservatism,” Miller said, referring to the confe…

“Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax
11:52
Summary: An increasing number of people in recent months have believed that they've made an important and novel scientific breakthrough, which they've developed in collaboration with an LLM, when they actually haven't. If you believe that you have made such a breakthrough, please consider that you might be mistaken! Many more people have been fooled…

“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt
14:02
I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like: RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) envi…

“⿻ Plurality & 6pack.care” by Audrey Tang
23:57
(Cross-posted from speaker's notes of my talk at Deepmind today.) Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at Deepmind. When we discuss "AI" and "society," two futures compete. In one—arguably the default trajectory—AI supercharges confli…

245 – AI Welfare, with Rob Long and Rosie Campbell of Eleos
1:33:54
Do we need to be concerned for the welfare of AIs today? What about the near future? Eleos AI Research is asking exactly that. LINKS: Eleos AI Research; People for the Ethical Treatment of Reinforcement Learners; Bees Can’t Suffer?; Lena, by qntm; When AI Seems Conscious; Experience Machines, Rob’s substack; The War on General Computation; WaPo on Blake Le…

[Linkpost] “The Cats are On To Something” by Hastings
4:45
This is a link post. So the situation as it stands is that the fraction of the light cone expected to be filled with satisfied cats is not zero. This is already remarkable. What's more remarkable is that this was orchestrated starting nearly 5000 years ago. As far as I can tell there were three intelligences, completely alien to each other, operating…

[Linkpost] “Open Global Investment as a Governance Model for AGI” by Nick Bostrom
2:13
This is a link post. I've seen many prescriptive contributions to AGI governance take the form of proposals for some radically new structure. Some call for a Manhattan project, others for the creation of a new international organization, etc. The OGI model, instead, is basically the status quo. More precisely, it is a model to which the status quo …

“Will Any Old Crap Cause Emergent Misalignment?” by J Bostock
8:39
The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can be found here. Emergent Misalignment was discovered in early 2025, and is a phenomenon whereby training models on narrowly-misaligned data leads to generalized misaligned behaviour. Betley et al. (20…

“AI Induced Psychosis: A shallow investigation” by Tim Hua
56:46
“This is a Copernican-level shift in perspective for the field of AI safety.” - Gemini 2.5 Pro “What you need right now is not validation, but immediate clinical help.” - Kimi K2 Two Minute Summary: There have been numerous media reports of AI-driven psychosis, where AIs validate users’ grandiose delusions and tell users to ignore their friends’ and…

“Before LLM Psychosis, There Was Yes-Man Psychosis” by johnswentworth
5:26
A studio executive has no beliefs / That's the way of a studio system / We've bowed to every rear of all the studio chiefs / And you can bet your ass we've kissed 'em / Even the birds in the Hollywood hills / Know the secret to our success / It's those magical words that pay the bills / Yes, yes, yes, and yes! “Don’t Say Yes Until I Finish Talking”, from SMASH S…

“Training a Reward Hacker Despite Perfect Labels” by ariana_azarbal, vgillioz, TurnTrout
13:19
Summary: Perfectly labeled outcomes in training can still boost reward hacking tendencies in generalization. This can hold even when the train/test sets are drawn from the exact same distribution. We induce this surprising effect via a form of context distillation, which we call re-contextualization: Generate model completions with a hack-encouragi…

“Banning Said Achmiz (and broader thoughts on moderation)” by habryka
51:47
It's been roughly 7 years since the LessWrong user-base voted on whether it's time to close down shop and become an archive, or to move towards the LessWrong 2.0 platform, with me as head-admin. For roughly as long, I have spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good Les…

“Underdog bias rules everything around me” by Richard_Ngo
13:26
People very often underrate how much power they (and their allies) have, and overrate how much power their enemies have. I call this “underdog bias”, and I think it's the most important cognitive bias for understanding modern society. I’ll start by describing a closely-related phenomenon. The hostile media effect is a well-known bias whereby people…

“Epistemic advantages of working as a moderate” by Buck
5:59
Many people who are concerned about existential risk from AI spend their time advocating for radical changes to how AI is handled. Most notably, they advocate for costly restrictions on how AI is developed now and in the future, e.g. the Pause AI people or the MIRI people. In contrast, I spend most of my time thinking about relatively cheap interve…

“Four ways Econ makes people dumber re: future AI” by Steven Byrnes
14:01
(Cross-posted from X, intended for a general audience.) There's a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples. Longpost: THE FIRST PIECE of Econ anti-pedago…

“Should you make stone tools?” by Alex_Altair
6:02
Knowing how evolution works gives you an enormously powerful tool to understand the living world around you and how it came to be that way. (Though it's notoriously hard to use this tool correctly, to the point that I think people mostly shouldn't try to use it when making substantial decisions.) The simple heuristic is "other people died because t…

“My AGI timeline updates from GPT-5 (and 2025 so far)” by ryan_greenblatt
7:26
As I discussed in a prior post, I felt like there were some reasonably compelling arguments for expecting very fast AI progress in 2025 (especially on easily verified programming tasks). Concretely, this might have looked like reaching 8 hour 50% reliability horizon lengths on METR's task suite[1] by now due to greatly scaling up RL and getting lar…

244 – How and Why to Form a Church, with Andrew Willsen
1:23:13
Andrew Willsen tells us how incorporating as a church allows you to navigate modernity, and gives us the basic steps to doing so. LINKS: Andrew’s church substack – The Church of the Infinite Game; To incorporate in CA file ARTS-PB-501(c)(3); CA Form FTB3500 booklet; IRS document 1828 (see also the summary); Nonprofit Compliance Checklist; The Ethical Cul…

“Hyperbolic model fits METR capabilities estimate worse than exponential model” by gjm
8:16
This is a response to https://www.lesswrong.com/posts/mXa66dPR8hmHgndP5/hyperbolic-trend-with-upcoming-singularity-fits-metr which claims that a hyperbolic model, complete with an actual singularity in the near future, is a better fit for the METR time-horizon data than a simple exponential model. I think that post has a serious error in it and its…

“My Interview With Cade Metz on His Reporting About Lighthaven” by Zack_M_Davis
10:06
On 12 August 2025, I sat down with New York Times reporter Cade Metz to discuss some criticisms of his 4 August 2025 article, "The Rise of Silicon Valley's Techno-Religion". The transcript below has been edited for clarity. ZMD: In accordance with our meetings being on the record in both directions, I have some more questions for you. I did not rea…

“Church Planting: When Venture Capital Finds Jesus” by Elizabeth
31:18
I’m going to describe a Type Of Guy starting a business, and you’re going to guess the business: The founder is very young, often under 25. He might work alone or with a founding team, but when he tells the story of the founding it will always have him at the center. He has no credentials for this business. This business has a grand vision, which h…

“Somebody invented a better bookmark” by Alex_Altair
3:35
This will only be exciting to those of us who still read physical paper books. But like. Guys. They did it. They invented the perfect bookmark. Classic paper bookmarks fall out easily. You have to put them somewhere while you read the book. And they only tell you that you left off reading somewhere in that particular two-page spread. Enter the Book…