#6: FTX collapse, value lock-in, and counterarguments to AI x-risk

37:47
 
Content provided by Matthew van der Merwe and Pablo Stafforini. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Matthew van der Merwe and Pablo Stafforini or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum, and follow on Twitter. Future Matters is also available in Spanish.

00:00 Welcome to Future Matters.
01:05 A message to our readers.
01:54 Finnveden, Riedel & Shulman — Artificial general intelligence and lock-in.
02:33 Grace — Counterarguments to the basic AI x-risk case.
03:17 Grace — Let’s think about slowing down AI.
04:18 Piper — Review of What We Owe the Future.
05:04 Clare & Martin — How bad could a war get?
05:26 Rodríguez — What is the likelihood that civilizational collapse would cause technological stagnation?
06:28 Ord — What kind of institution is needed for existential security?
07:00 Ezell — A lunar backup record of humanity.
07:37 Tegmark — Why I think there's a one-in-six chance of an imminent global nuclear war.
08:31 Hobbhahn — The next decades might be wild.
08:54 Karnofsky — Why would AI "aim" to defeat humanity?
09:44 Karnofsky — High-level hopes for AI alignment.
10:27 Karnofsky — AI safety seems hard to measure.
11:10 Karnofsky — Racing through a minefield.
12:07 Barak & Edelman — AI will change the world, but won’t take it over by playing “3-dimensional chess”.
12:53 Our World in Data — New page on artificial intelligence.
14:06 Luu — Futurist prediction methods and accuracy.
14:38 Kenton et al. — Clarifying AI x-risk.
15:39 Wyg — A theologian's response to anthropogenic existential risk.
16:12 Wilkinson — The unexpected value of the future.
16:38 Aaronson — Talk on AI safety.
17:20 Tarsney & Wilkinson — Longtermism in an infinite world.
18:13 One-line summaries.
25:01 News.
28:29 Conversation with Katja Grace.
28:42 Could you walk us through the basic case for existential risk from AI?
29:42 What are the most important weak points in the argument?
30:37 Comparison between misaligned AI and corporations.
32:07 How do you think people in the AI safety community are getting this basic case wrong?
33:23 If these arguments were supplemented with clearer claims, does that rescue some of the plausibility?
34:30 Does the disagreement about the basic intuitive case for AI risk undermine the case itself?
35:34 Could you describe how your views on AI risk have changed over time?
36:14 Could you quantify your credence in the probability of existential catastrophe from AI?
36:52 When you reached that number, did it surprise you?
