#4: AI timelines, AGI risk, and existential risk from climate change

Duration: 31:13
 
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read it on the EA Forum, and follow us on Twitter.

00:00 Welcome to Future Matters
01:11 Steinhardt — AI forecasting: one year in
01:52 Davidson — Social returns to productivity growth
02:26 Brundage — Why AGI timeline research/discourse might be overrated
03:03 Cotra — Two-year update on my personal AI timelines
03:50 Grace — What do ML researchers think about AI in 2022?
04:43 Leike — On the windfall clause
05:35 Cotra — Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
06:32 Maas — Introduction to strategic perspectives on long-term AI governance
06:52 Hadshar — How moral progress happens: the decline of footbinding as a case study
07:35 Trötzmüller — Why EAs are skeptical about AI safety
08:08 Schubert — Moral circle expansion isn’t the key value change we need
08:52 Šimčikas — Wild animal welfare in the far future
09:51 Heikkinen — Strong longtermism and the challenge from anti-aggregative moral views
10:28 Rational Animations — Video on Karnofsky's Most important century
11:23 Other research
12:47 News
15:00 Conversation with John Halstead
15:33 What level of emissions should we reasonably expect over the coming decades?
18:11 What do those emissions imply for warming?
20:52 How worried should we be about the risk of climate change from a longtermist perspective?
26:53 What is the probability of an existential catastrophe due to climate change?
27:06 Do you think EAs should fund modelling work of tail risks from climate change?
28:45 What would be the best use of funds?
