#64 – Michael Aird on Strategies for Reducing AI Existential Risk
Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.
In this episode, we talk about:
- The basic case for working on existential risk from AI
- How to begin figuring out what to do to reduce the risks
- Threat models for the risks of advanced AI
- 'Theories of victory' for how the world mitigates the risks
- 'Intermediate goals' in AI governance
- What useful (and less useful) research looks like for reducing AI x-risk
- Practical advice for usefully contributing to efforts to reduce existential risk from AI
- Resources for getting started and finding job openings
Key links:
- Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023)
- Rethink Priorities' survey on intermediate goals in AI governance
- The Rethink Priorities newsletter
- The Rethink Priorities tab on the Effective Altruism Forum
- Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier
- Strategic Perspectives on Long-term AI Governance by Matthijs Maas
- Michael's posts on the Effective Altruism Forum (under the username "MichaelA")
- The 80,000 Hours job board