Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)

1:24:49
 
Joining us for the seventh episode of our AGI Governance series on The Trajectory is Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity.
Toby is one of the world’s most influential thinkers on long-term risk - and one of the clearest voices on how advanced AI could shape, or shatter, the trajectory of human civilization.
In this episode, Toby unpacks the evolving technical and economic landscape of AGI - particularly the implications of model deployment, imitation learning, and the limits of current training paradigms. He draws on his unique position as both a moral philosopher and a close observer of recent AI breakthroughs to highlight shifts that could alter the pace and nature of AGI progress.
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/TIz9TpVCFcQ
See the full article from this episode: https://danfaggella.com/ord1
...
There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai

