Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

Duration: 2:07:07
 
What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive two-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence, and why the people building it might be the biggest threat of all. Kokotajlo predicts AGI by 2028 based on compute scaling trends. Marcus argues we still haven't solved the basic cognitive problems he identified in his 2001 research. The stakes? If Kokotajlo is right and Marcus is wrong about safety progress, humanity may have already lost control.

Sponsor messages:

========

Google Gemini: The Gemini app now features Veo 3, a state-of-the-art AI video generation model. Sign up at https://gemini.google.com

Tufa AI Labs is hiring ML Engineers and a Chief Scientist in Zurich/SF. They are top of the ARCv2 leaderboard!

https://tufalabs.ai/

========

Guest Powerhouse

Gary Marcus - Cognitive scientist, author of "Taming Silicon Valley," and AI's most prominent skeptic who's been warning about the same fundamental problems for 25 years (https://garymarcus.substack.com/)

Daniel Kokotajlo - Former OpenAI insider turned whistleblower who reveals the disturbing rationalizations of AI lab leaders in his viral "AI 2027" scenario (https://ai-2027.com/)

Dan Hendrycks - Director of the Center for AI Safety who created the benchmarks used to measure AI progress and argues we have only years, not decades, to prevent catastrophe (https://danhendrycks.com/)

Transcript:

http://app.rescript.info/public/share/tEcx4UkToi-2jwS1cN51CW70A4Eh6QulBRxDILoXOno

TOC:

Introduction: The AI Arms Race

00:00:04 - The Danger of Automated AI R&D

00:00:43 - The Rationalization: "If we don't, someone else will"

00:01:56 - Sponsor Reads (Tufa AI Labs & Google Gemini)

00:02:55 - Guest Introductions

The Philosophical Stakes

00:04:13 - What is the Positive Vision for AGI?

00:07:00 - The Abundance Scenario: Superintelligent Economy

00:09:06 - Differentiating AGI and Superintelligence (ASI)

00:11:41 - Sam Altman: "A Decade in a Month"

00:14:47 - Economic Inequality & The UBI Problem

Policy and Red Lines

00:17:13 - The Pause Letter: Stopping vs. Delaying AI

00:20:03 - Defining Three Concrete Red Lines for AI Development

00:25:24 - Racing Towards Red Lines & The Myth of "Durable Advantage"

00:31:15 - Transparency and Public Perception

00:35:16 - The Rationalization Cascade: Why AI Labs Race to "Win"

Forecasting AGI: Timelines and Methodologies

00:42:29 - The Case for Short Timelines (Median 2028)

00:47:00 - Scaling Limits: Compute, Data, and Money

00:49:36 - Forecasting Models: Bio-Anchors and Agentic Coding

00:53:15 - The 10^45 FLOP Thought Experiment

The Great Debate: Cognitive Gaps vs. Scaling

00:58:41 - Gary Marcus's Counterpoint: The Unsolved Problems of Cognition

01:00:46 - Current AI Can't Play Chess Reliably

01:08:23 - Can Tools and Neurosymbolic AI Fill the Gaps?

01:16:13 - The Multi-Dimensional Nature of Intelligence

01:24:26 - The Benchmark Debate: Data Contamination and Reliability

01:31:15 - The Superhuman Coder Milestone Debate

01:37:45 - The Driverless Car Analogy

The Alignment Problem

01:39:45 - Has Any Progress Been Made on Alignment?

01:42:43 - "Fairly Reasonably Scares the Sh*t Out of Me"

01:46:30 - Distinguishing Model vs. Process Alignment

Scenarios and Conclusions

01:49:26 - Gary's Alternative Scenario: The Neurosymbolic Shift

01:53:35 - Will AI Become Jeff Dean?

01:58:41 - Takeoff Speeds and Exceeding Human Intelligence

02:03:19 - Final Disagreements and Closing Remarks

REFS:

Gary Marcus (2001) - The Algebraic Mind

https://mitpress.mit.edu/9780262632683/the-algebraic-mind/

00:59:00

Gary Marcus & Ernest Davis (2019) - Rebooting AI

https://www.penguinrandomhouse.com/books/566677/rebooting-ai-by-gary-marcus-and-ernest-davis/

01:31:59

Gary Marcus (2024) - Taming Silicon Valley

https://www.hachettebookgroup.com/titles/gary-marcus/taming-silicon-valley/9781541704091/

00:03:01
