Tom Davidson on How Quickly AI Could Automate the Economy
Content provided by Gus Docker and Future of Life Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Gus Docker and Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.

Timestamps:
00:00 The current pace of AI
03:58 Near-term risks from AI
09:34 Historical analogies to AI
13:58 AI benchmarks vs. economic impact
18:30 AI takeoff speed and bottlenecks
31:09 Tom's model of AI takeoff speed
36:21 How AI could automate AI research
41:49 Bottlenecks to AI automating AI hardware
46:15 How much of AI research is automated now?
48:26 From 20% to 100% automation
53:24 AI takeoff in 3 years
1:09:15 Economic impacts of fast AI takeoff
1:12:51 Bottlenecks slowing AI takeoff
1:20:06 Does the market predict a fast AI takeoff?
1:25:39 "Hard to avoid AGI by 2060"
1:27:22 Risks from AI over the next 20 years
1:31:43 AI progress without more compute
1:44:01 What if AI models fail safety evaluations?
1:45:33 Cybersecurity at AI companies
1:47:33 Will AI turn out well for humanity?
1:50:15 AI and board games
All episodes of the Future of Life Institute Podcast (231 episodes)
Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding) 1:02:32
On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI’s global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps. You can learn more about Jeffrey’s work at: https://jeffreyjding.github.io Timestamps: 00:00:00 Preview and introduction 00:01:36 A US-China AI arms race? 00:10:58 Attitudes to AI safety in China 00:17:53 Diffusion of AI 00:25:13 Innovation without diffusion 00:34:29 AI development concentration 00:41:40 Learning from the history of technology 00:47:48 Translating Chinese AI writings 00:55:36 Automating translation of AI writings…
How Will We Cooperate with AIs? (with Allison Duettmann) 1:36:02
On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children. You can learn more about Allison's work at: https://foresight.org Timestamps: 00:00:00 Preview 00:01:07 Centralized AI versus decentralized AI 00:13:02 Risks from decentralized AI 00:25:39 International AI governance 00:39:52 Cooperation with future AIs 00:53:51 AI for decision-making 01:05:58 Capital intensity of AI 01:09:11 Lessons from history 01:15:50 Future space law and property rights 01:27:28 Is technology invented or discovered? 01:32:34 Children in the age of AI…
Brain-like AGI and why it's Dangerous (with Steven Byrnes) 1:13:13
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies. You can learn more about Steven's work at: https://sjbyrnes.com/agi.html Timestamps: 00:00 Preview 00:54 Brain-like AGI Safety 13:16 Controlled AGI versus Social-instinct AGI 19:12 Learning from the brain 28:36 Why is brain-like AI the most likely path to AGI? 39:23 Honesty in AI models 44:02 How to help with brain-like AGI safety 53:36 AI traits with both positive and negative effects 01:02:44 Different AI safety strategies…
How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil) 1:34:33
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines. You can learn more about Ege's work at https://epoch.ai Timestamps: 00:00:00 – Preview and introduction 00:02:59 – Compute scaling and automation - GATE model 00:13:12 – Evolution, Brain Efficiency, and AGI Compute Requirements 00:29:49 – Broad Automation vs. R&D-Focused AI Deployment 00:47:19 – AI, Wages, and Labor Market Transitions 00:59:54 – Training Agentic Models and Long-Term Planning Capabilities 01:06:56 – Moravec’s Paradox and Automation of Human Skills 01:13:59 – Which Jobs Are Most Vulnerable to AI? 01:33:00 – Timeline Extremes: What Could Change AI Forecasts?…
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz) 2:23:12
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research. 00:00 Nicholas Carlini's contributions to cybersecurity 08:19 Understanding attack strategies 29:39 High-dimensional spaces and attack intuitions 51:00 Challenges in open-source model safety 01:00:11 Unlearning and fact editing in models 01:10:55 Adversarial examples and human robustness 01:37:03 Cryptography and AI robustness 01:55:51 Scaling AI security research…
Keep the Future Human (with Anthony Aguirre) 1:21:03
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI... Timestamps: 00:00 What situation is humanity in? 05:00 Why AI progress is fast 09:56 Tool AI instead of AGI 15:56 The incentives of AI companies 19:13 Governments can coordinate a slowdown 25:20 The need for international coordination 31:59 Monitoring training runs 39:10 Do reasoning models undermine compute governance? 49:09 Why isn't alignment enough? 59:42 How do we decide if we want AGI? 01:02:18 Disagreement about AI 01:11:12 The early days of AI risk…
We Created AI. Why Don't We Understand It? (with Samir Varma) 1:16:15
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness. You can find out more about Samir's work here: https://samirvarma.com Timestamps: 00:00 AIs with free will? 08:00 Can we predict AI behavior? 11:38 AI psychology 16:24 Which concepts will AIs use? 20:19 Will we collaborate with AIs? 26:16 Will we trade with AIs? 31:40 Training data for robots 34:00 AI in finance 39:55 How much of trading is automated? 49:00 AI in biology and complex systems 59:31 Will our skills atrophy? 01:02:55 Levels of scientific explanation 01:06:12 AIs with emotions and consciousness? 01:12:12 Why can't we predict recessions?…
Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish) 1:22:33
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us. We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here: https://palisaderesearch.org/blog/specification-gaming Timestamps: 00:00 The pace of AI progress 04:15 How we might lose control 07:23 Why are AIs sometimes dumb? 12:52 Benchmarks vs real world 19:11 Loss of control scenarios 26:36 Why would AI turn against us? 30:35 AIs hacking chess 36:25 Why didn't more advanced AIs hack? 41:39 Creating honest AIs 49:44 AI attackers vs AI defenders 58:27 How good is security at AI companies? 01:03:37 A sense of urgency 01:10:11 What should we do? 01:15:54 Skepticism about AI progress…
Ann Pace on using Biobanking and Genomic Sequencing to Conserve Biodiversity 46:09
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts. You can learn more about Ann's work here: https://www.wiseancestors.org Timestamps: 00:00 What is Wise Ancestors? 04:27 Recovering after catastrophes 11:40 Decentralized science 18:28 Upfront benefit-sharing 26:30 Local communities 32:44 Recreating optimal environments 38:57 Cross-cultural collaboration…
Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective 1:25:56
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss the meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI. You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot Timestamps: 00:00 Meta-narratives and transhumanism 15:28 Advanced AI and religious communities 27:22 Superintelligence 38:31 Countercultures and technology 52:38 Christian perspectives and tradition 01:05:20 God-like artificial intelligence 01:13:15 A positive vision for AI…
David Dalrymple on Safeguarded, Transformative AI 1:40:06
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware. You can learn more about David's work at ARIA here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/ Timestamps: 00:00 What is Safeguarded AI? 16:28 Implementing Safeguarded AI 22:58 Can we trust Safeguarded AIs? 31:00 Formalizing more of the world 37:34 The performance cost of verified AI 47:58 Changing attitudes towards AI 52:39 Flexible Hardware-Enabled Guarantees 01:24:15 Mind uploading 01:36:14 Lessons from David's early life…
Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters 1:09:26
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com Timestamps: 00:00 What is GiveDirectly? 15:04 AI for targeting cash transfers 29:39 AI for predicting natural disasters 46:04 How scalable is GiveDirectly's AI approach? 58:10 Decentralized vs. centralized data collection 1:04:30 Dream scenario for GiveDirectly…
Nathan Labenz on the State of AI and Progress since GPT-4 3:20:04
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. You can find Nathan's podcast here: https://www.cognitiverevolution.ai Timestamps: 00:00 AI progress since GPT-4 10:50 Multimodality 19:06 Low-cost models 27:58 Coding versus medicine/law 36:09 AI agents 45:29 How much are people using AI? 53:39 Open source 01:15:22 AI industry analysis 01:29:27 Are some AI models kept internal? 01:41:00 Money is not the limiting factor in AI 01:59:43 AI and biology 02:08:42 Robotics and self-driving 02:24:14 Inference-time compute 02:31:56 AI governance 02:36:29 Big-picture overview of AI progress and safety…
Connor Leahy on Why Humanity Risks Extinction from AGI 1:58:50
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this. Here's the document we discuss in the episode: https://www.thecompendium.ai Timestamps: 00:00 The Compendium 15:25 The motivations of AGI corps 31:17 AI is grown, not written 52:59 A science of intelligence 01:07:50 Jobs, work, and AGI 01:23:19 Superintelligence 01:37:42 Open-source AI 01:45:07 What can we do?…
Suzy Shepherd on Imagining Superintelligence and "Writing Doom" 1:03:08
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world. Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4 Timestamps: 00:00 Writing Doom 08:23 Humor in Writing Doom 13:31 Concise writing 18:37 Getting feedback 27:02 Alternative characters 36:31 Popular video formats 46:53 AI in filmmaking 49:52 Meaning in the future…