Ep 65: Co-Authors of AI-2027 Daniel Kokotajlo and Thomas Larsen On Their Detailed AI Predictions for the Coming Years
Manage episode 482748350 series 3495253
The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI.
Authors Daniel Kokotajlo (a former OpenAI researcher now focused full-time on alignment through his nonprofit, AI Futures, and one of TIME's 100 most influential people in AI) and Thomas Larsen joined the show to unpack their findings.
We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.
(0:00) Intro
(1:15) Overview of AI 2027
(2:32) AI Development Timeline
(4:10) Race and Slowdown Branches
(12:52) US vs China
(18:09) Potential AI Misalignment
(31:06) Getting Serious About the Threat of AI
(47:23) Predictions for AI Development by 2027
(48:33) Public and Government Reactions to AI Concerns
(49:27) Policy Recommendations for AI Safety
(52:22) Diverging Views on AI Alignment Timelines
(1:01:30) The Role of Public Awareness in AI Safety
(1:02:38) Reflections on Insider vs. Outsider Strategies
(1:10:53) Future Research and Scenario Planning
(1:14:01) Best and Worst Case Outcomes for AI
(1:17:02) Final Thoughts and Hopes for the Future
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia
- Former COO GitHub, Founder Bitnami (acquired by VMware)
@jordan_segall
- Partner at Redpoint