Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723
Manage episode 471860280 series 2355587
Today, we're joined by Jonas Geiping, research group leader at the ELLIS Institute and the Max Planck Institute for Intelligent Systems, to discuss his recent paper, “Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach.” The paper proposes a novel language model architecture that uses recurrent depth to enable “thinking in latent space.” We dig into “internal reasoning” versus “verbalized reasoning” (analogous to non-verbalized and verbalized thinking in humans), and discuss how the model searches in latent space to predict the next token and dynamically allocates more compute based on token difficulty. We also explore how the recurrent depth architecture simplifies LLMs, the parallels to diffusion models, the model's performance on reasoning tasks, the challenges of comparing models with varying compute budgets, and architectural advantages such as zero-shot adaptive exits and natural speculative decoding.
The complete show notes for this episode can be found at https://twimlai.com/go/723.
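The core idea discussed above, iterating a shared block on a latent state and exiting adaptively once the state stops changing, can be sketched in a few lines. This is a toy illustration only: `core_block`, the `tanh` update, and the convergence threshold are hypothetical stand-ins for the paper's actual transformer core, chosen so the sketch is self-contained and runnable.

```python
import numpy as np

def core_block(state, x, W):
    # One application of the shared recurrent block.
    # Hypothetical stand-in: a contractive map mixing the latent
    # state with the (fixed) input embedding x.
    return np.tanh(W @ state + x)

def recurrent_depth_forward(x, W, max_iters=64, tol=1e-3):
    """Iterate the shared block in latent space, exiting as soon as
    the latent state converges (a zero-shot adaptive exit): easy
    tokens use few iterations, hard tokens use the full budget."""
    state = np.zeros_like(x)
    for step in range(1, max_iters + 1):
        new_state = core_block(state, x, W)
        if np.linalg.norm(new_state - state) < tol:
            return new_state, step   # converged early
        state = new_state
    return state, max_iters          # budget exhausted

rng = np.random.default_rng(0)
d = 16
# Scale W so the update is a contraction and the iteration converges.
W = 0.3 * rng.standard_normal((d, d)) / np.sqrt(d)
x = rng.standard_normal(d)
latent, steps_used = recurrent_depth_forward(x, W)
print(f"converged after {steps_used} iterations")
```

Because compute is spent by repeating one block rather than stacking more layers, the same trained weights can be run with a larger or smaller iteration budget at inference time, which is the sense in which test-time compute "scales up" here.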
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
747 episodes
All episodes

Generative Benchmarking with Kelly Hong - #728 (54:17)
Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen - #727 (1:34:06)
Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726 (51:45)
Waymo's Foundation Model for Autonomous Driving with Drago Anguelov - #725 (1:09:07)
Dynamic Token Merging for Efficient Byte-level Language Models with Julie Kallini - #724 (50:32)
Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723 (58:38)
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722 (42:11)
Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff - #721 (49:29)
Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant - #720 (1:07:05)
π0: A Foundation Model for Robotics with Sergey Levine - #719 (52:30)
AI Trends 2025: AI Agents and Multi-Agent Systems with Victor Dibia - #718 (1:44:59)
Speculative Decoding and Efficient LLM Inference with Chris Lott - #717 (1:16:30)
Ensuring Privacy for Any LLM with Patricia Thaine - #716 (51:33)
AI Engineering Pitfalls with Chip Huyen - #715 (57:37)
Evolving MLOps Platforms for Generative AI and Agents with Abhijit Bose - #714 (58:08)