20 - 'Reform' AI Alignment with Scott Aaronson

Duration: 2:27:35
 

How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI.
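For the curious, here is a minimal illustrative sketch in Python of the "Gumbel trick" style of language-model watermarking that Scott has described publicly: a keyed pseudorandom function biases which token is emitted without changing the sampling distribution, so only a key-holder can test for the watermark. The key, hash construction, context handling, and toy vocabulary below are assumptions made for illustration, not his actual implementation.

```python
# A toy, self-contained sketch of a "Gumbel trick" watermark.
# NOT the actual deployed scheme: the key, hashing, and context
# handling here are invented for illustration.
import hashlib
import math

SECRET_KEY = b"hypothetical-key"  # placeholder shared secret

def pseudorandom_r(context, token):
    """Deterministic pseudorandom value in (0, 1), keyed on the secret
    key, the recent context, and the candidate token."""
    digest = hashlib.sha256(SECRET_KEY + context.encode() + token.encode()).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def watermarked_sample(probs, context):
    """Pick argmax of r ** (1 / p). Over the randomness of the key this
    is distributed exactly like ordinary sampling from probs, but the
    chosen token's r is biased toward 1, which a key-holder can detect."""
    return max(probs, key=lambda tok: pseudorandom_r(context, tok) ** (1.0 / probs[tok]))

def detection_score(tokens, window=3):
    """Sum -ln(1 - r) over emitted tokens; watermarked text scores
    systematically higher than unwatermarked text of the same length."""
    total = 0.0
    for i, tok in enumerate(tokens):
        context = " ".join(tokens[max(0, i - window):i])
        total += -math.log(1.0 - pseudorandom_r(context, tok))
    return total

# One sampling step with a made-up next-token distribution:
print(watermarked_sample({"cat": 0.5, "dog": 0.3, "bird": 0.2}, context="the quick"))
```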

Note: this episode was recorded before this story (vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says) emerged, about a man who died by suicide after conversations with a language-model-based chatbot, conversations that included discussion of the possibility of him killing himself.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

Topics we discuss, and timestamps:

- 0:00:36 - 'Reform' AI alignment

- 0:01:52 - Epistemology of AI risk

- 0:20:08 - Immediate problems and existential risk

- 0:24:35 - Aligning deceitful AI

- 0:30:59 - Stories of AI doom

- 0:34:27 - Language models

- 0:43:08 - Democratic governance of AI

- 0:59:35 - What would change Scott's mind

- 1:14:45 - Watermarking language model outputs

- 1:41:41 - Watermark key secrecy and backdoor insertion

- 1:58:05 - Scott's transition to AI research

- 2:03:48 - Theoretical computer science and AI alignment

- 2:14:03 - AI alignment and formalizing philosophy

- 2:22:04 - How Scott finds AI research

- 2:24:53 - Following Scott's research

The transcript: axrp.net/episode/2023/04/11/episode-20-reform-ai-alignment-scott-aaronson.html

Links to Scott's things:

- Personal website: scottaaronson.com

- Book, Quantum Computing Since Democritus: amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565/

- Blog, Shtetl-Optimized: scottaaronson.blog

Writings we discuss:

- Reform AI Alignment: scottaaronson.blog/?p=6821

- Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974
