Andrea Miotti on a Narrow Path to Safe, Transformative AI
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like.
Here's the document we discuss in the episode:
https://www.narrowpath.co
Timestamps:
00:00 A Narrow Path
06:10 Can we predict future AI capabilities?
11:10 Risks from current AI development
17:56 The benefits of narrow AI
22:30 Against self-improving AI
28:00 Cybersecurity at AI companies
33:55 Unbounded AI
39:31 Global coordination on AI safety
49:43 Monitoring training runs
01:00:20 Benefits of cooperation
01:04:58 A science of intelligence
01:25:36 How you can help