Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]
This is an interview with Max Tegmark, MIT professor, co-founder of the Future of Life Institute, and author of Life 3.0.
This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris.
See the full article from this episode: https://danfaggella.com/tegmark1
Listen to the full podcast episode: https://youtu.be/yQ2fDEQ4Ol0
This episode refers to the following other essays and resources:
-- Max's A.G.I. Framework / "Keep the Future Human" - https://keepthefuturehuman.ai/
-- AI Safety Connect - https://aisafetyconnect.com
...
There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai