The Great Simplification with Nate Hagens: Algorithmic Cancer: Why AI Development Is Not What You Think with Connor Leahy
Recently, the risks of Artificial Intelligence and the need for ‘alignment’ have been flooding our cultural discourse, with Artificial Super Intelligence cast as both the most promising goal and the most pressing threat. But amid the moral debate, surprisingly little attention has been paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?
In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks of its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon he calls ‘algorithmic cancer’: AI-generated content that crowds out genuine human creations, propelled by algorithms that can’t tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power within tech companies.
What kinds of policy and regulatory approaches could help slow AI’s acceleration and create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology’s impacts on mental health, meaning, and societal well-being?
(Conversation recorded on May 21st, 2025)
About Connor Leahy:
Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.
Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.
Watch this video episode on YouTube
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.
---
Support The Institute for the Study of Energy and Our Future