Content provided by Conviction. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Conviction or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Asimov: Building An Omniscient RL Oracle with ReflectionAI’s Misha Laskin

1:02:54
 
Episode 494936280 | Series 3444082

Superintelligence, at least in an academic sense, has already been achieved. But Misha Laskin thinks the next step toward artificial superintelligence (ASI) should be more user- and problem-focused. ReflectionAI co-founder and CEO Misha Laskin joins Sarah Guo to introduce Asimov, their new code-comprehension agent built on reinforcement learning (RL). Misha talks about creating tools and designing AI agents around customer needs, and how that shapes eval development and the scope of the agent’s memory. The two also discuss the challenges of scaling RL, the future of ASI, and the implications of Google’s “non-acquisition” of Windsurf.

Sign up for new podcasts every week. Email feedback to [email protected]

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @MishaLaskin | @reflection_ai

Chapters:

00:00 – Misha Laskin Introduction

00:44 – Superintelligence vs. Super Intelligent Autonomous Systems

03:26 – Misha’s Journey from Physics to AI

07:48 – Asimov Product Release

11:52 – What Differentiates Asimov from Other Agents

16:15 – Asimov’s Eval Philosophy

21:52 – The Types of Queries Where Asimov Shines

24:35 – Designing a Team-Wide Memory for Asimov

28:38 – Leveraging Pre-Trained Models

32:47 – The Challenges of Solving Scaling in RL

37:21 – Training Agents in Copycat Software Environments

38:25 – When Will We See ASI?

44:27 – Thoughts on Windsurf’s Non-Acquisition

48:10 – Exploring Non-RL Datasets

55:12 – Tackling Problems Beyond Engineering and Coding

57:54 – Where We’re At in Deploying ASI in Different Fields

01:02:30 – Conclusion

