Content provided by Daniel Faggella. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Daniel Faggella or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

1:14:45
 
Manage episode 462977098 series 3610999
This is an interview with Eliezer Yudkowsky, AI Researcher at the Machine Intelligence Research Institute.
This is the sixth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.
Watch this episode on The Trajectory YouTube channel: https://www.youtube.com/watch?v=YlsvQO0zDiE
See the full article from this episode: https://danfaggella.com/yudkowsky1
...
The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: https://danfaggella.com/trajectory
-- X: https://x.com/danfaggella
-- LinkedIn: https://linkedin.com/in/danfaggella
-- Newsletter: https://bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954


22 episodes

