Content provided by Redpoint Ventures and By Redpoint Ventures. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Redpoint Ventures and By Redpoint Ventures or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Ep 65: Co-Authors of AI-2027 Daniel Kokotajlo and Thomas Larsen On Their Detailed AI Predictions for the Coming Years

1:23:27
Manage episode 482748350 series 3495253
The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI.

Authors @Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, @AI Futures, and one of TIME’s 100 most influential people in AI) and @Thomas Larsen joined the show to unpack their findings.

We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.

(0:00) Intro
(1:15) Overview of AI 2027
(2:32) AI Development Timeline
(4:10) Race and Slowdown Branches
(12:52) US vs China
(18:09) Potential AI Misalignment
(31:06) Getting Serious About the Threat of AI
(47:23) Predictions for AI Development by 2027
(48:33) Public and Government Reactions to AI Concerns
(49:27) Policy Recommendations for AI Safety
(52:22) Diverging Views on AI Alignment Timelines
(1:01:30) The Role of Public Awareness in AI Safety
(1:02:38) Reflections on Insider vs. Outsider Strategies
(1:10:53) Future Research and Scenario Planning
(1:14:01) Best and Worst Case Outcomes for AI
(1:17:02) Final Thoughts and Hopes for the Future

With your co-hosts:

@jacobeffron

- Partner at Redpoint, Former PM at Flatiron Health

@patrickachase

- Partner at Redpoint, Former ML Engineer at LinkedIn

@ericabrescia

- Former COO of GitHub, Founder of Bitnami (acquired by VMware)

@jordan_segall

- Partner at Redpoint

69 episodes

