Content provided by the Center for Humane Technology, Tristan Harris, and Aza Raskin. All podcast content, including episodes, graphics, and episode descriptions, is uploaded and provided directly by the Center for Humane Technology or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

AGI Beyond the Buzz: What Is It, and Are We Ready?

52:53
 

What does it really mean to ‘feel the AGI’? Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

In this episode, Aza Raskin and Randy Fernando dive deep into what ‘feeling the AGI’ really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.

As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?

Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

RECOMMENDED MEDIA

Daniel Kokotajlo et al.’s “AI 2027” paper
A demo of OmniHuman-1, referenced by Randy
A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
A paper from Palisade Research that found an AI would cheat in order to win
The treaty that banned blinding laser weapons
Further reading on the moratorium on germline editing

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive

Behind the DeepSeek Hype, AI is Learning to Reason

The Tech-God Complex: Why We Need to Be Skeptics

This Moment in AI: How We Got Here and Where We’re Going

How to Think About AI Consciousness with Anil Seth

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Clarification: When Randy referenced a “$110 trillion game” as the target for AI companies, he was referring to the entire global economy.

136 episodes
