Jeremie and Edouard Harris - What Makes US-China Alignment Around AGI So Hard (US-China AGI Relations, Episode 2)

1:33:06
 
This is an interview with Jeremie and Edouard Harris, Canadian researchers with backgrounds in AI governance and national security consulting, and co-founders of Gladstone AI.
In this episode, Jeremie and Edouard explain why trusting China on AGI is dangerous, highlight ongoing espionage in Western labs, explore verification tools like tamper-proof chips, and argue that slowing China's AI progress may be vital for safe alignment.
This is the second installment of our "US-China AGI Relations" series, where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races.
This episode refers to the following essays:
-- The International Governance of AI – We Unite or We Fight: https://emerj.com/international-governance-ai/
-- Potentia and Potestas: Achieving The Goldilocks Zone of AGI Governance: https://danfaggella.com/potestas
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/jf6Oy3C3mLA
See the full article from this episode: https://danfaggella.com/harris1
...
There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
