Content provided by Foresight Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Foresight Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Irina Rish | AI & Scale

12:18
 

How has the history of AI been shaped by the "bitter lesson" that simple scaling beats complex algorithms, and what comes next? In this talk, Irina Rish traces AI's evolution from rule-based systems to today's foundation models, exploring how scaling laws predicted performance improvements and recent shifts toward more efficient approaches. She covers the progression from GPT scaling laws to Chinchilla's compute-optimal training, the rise of inference-time computation with models like OpenAI's o1, and why we might need to move beyond transformers to truly brain-inspired dynamical systems.


Irina Rish is a professor at the University of Montreal and at Mila, the Quebec AI Institute. She also co-founded a startup focused on developing more efficient foundation models and recently released a suite of open-source compressed models.


This talk was recorded at Vision Weekend Puerto Rico 2025. To see the slides and more talks from the event, please visit our YouTube channel.



Hosted on Acast. See acast.com/privacy for more information.


Foresight Institute Radio

