
#64 – Michael Aird on Strategies for Reducing AI Existential Risk

Duration: 3:12:56
 

Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.

In this episode, we talk about:

  • The basic case for working on existential risk from AI
  • How to begin figuring out what to do to reduce the risks
  • Threat models for the risks of advanced AI
  • 'Theories of victory' for how the world mitigates the risks
  • 'Intermediate goals' in AI governance
  • What useful (and less useful) research looks like for reducing AI x-risk
  • Practical advice for usefully contributing to efforts to reduce existential risk from AI
  • Resources for getting started and finding job openings

Key links:

