
#3: digital sentience, AGI ruin, and forecasting track records

34:05
 
Content provided by Matthew van der Merwe and Pablo Stafforini. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Matthew van der Merwe and Pablo Stafforini or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

Episode Notes

Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read it on the EA Forum, and follow it on Twitter.

00:00 Welcome to Future Matters
01:11 Long — Lots of links on LaMDA
01:48 Lovely — Do we need a better understanding of 'progress'?
02:11 Base — Things usually end slowly
02:47 Yudkowsky — AGI ruin: a list of lethalities
03:38 Christiano — Where I agree and disagree with Eliezer
04:31 Garfinkel — On deference and Yudkowsky's AI risk estimates
05:13 Karnofsky — The track record of futurists seems … fine
06:08 Aaronson — Joining OpenAI to work on AI safety
06:52 Shiller — The importance of getting digital consciousness right
07:53 Pilz — Germans' opinions on translations of "longtermism"
08:33 Karnofsky — AI could defeat all of us combined
09:36 Beckstead — Future Fund June 2022 update
11:02 News
14:45 Conversation with Robert Long
15:05 What artificial sentience is and why it's important
16:56 "The Big Question" and the assumptions on which it depends
19:30 How problems arising from AI agency and AI sentience compare in terms of importance, neglectedness, and tractability
21:57 AI sentience and the alignment problem
24:01 The Blake Lemoine saga and the quality of the ensuing public discussion
26:29 The risks of AI sentience becoming lumped in with certain other views
27:55 How to deal with objections coming from different frameworks
28:50 The analogy between AI sentience and animal welfare
30:10 The probability of large language models like LaMDA and GPT-3 being sentient
32:41 Are verbal reports strong evidence for sentience?

