“Interpretability Will Not Reliably Find Deceptive AI” by Neel Nanda
Content provided by LessWrong. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
(Disclaimer: Post written in a personal capacity. These are personal hot takes and do not in any way represent my employer's views.)
TL;DR: I do not think we will produce high-reliability methods to evaluate or monitor the safety of superintelligent systems via current research paradigms, with interpretability or otherwise. Interpretability seems a valuable tool here and remains worth investing in, as it will hopefully increase the reliability we can achieve. However, interpretability should be viewed as part of an overall portfolio of defences: a layer in a defence-in-depth strategy. It is not the one thing that will save us, and it still won’t be enough for high reliability.
Introduction
There's a common, often implicit, argument made in AI safety discussions: interpretability is presented as the only reliable path forward for detecting deception in advanced AI. Among many other sources, it was argued for in [...]
---
Outline:
(00:55) Introduction
(02:57) High Reliability Seems Unattainable
(05:12) Why Won't Interpretability Be Reliable?
(07:47) The Potential of Black-Box Methods
(08:48) The Role of Interpretability
(12:02) Conclusion
The original text contained 5 footnotes which were omitted from this narration.
---
First published: May 4th, 2025
Source: https://www.lesswrong.com/posts/PwnadG4BFjaER3MGf/interpretability-will-not-reliably-find-deceptive-ai
---
Narrated by TYPE III AUDIO.