Sleep-time Compute: Beyond Inference Scaling at Test-time
What if your LLM could think ahead—preparing answers before questions are even asked?
In this week's paper read, we dive into a groundbreaking new paper from researchers at Letta, introducing sleep-time compute: a novel technique that lets models do their heavy lifting offline, well before the user query arrives. By predicting likely questions and precomputing key reasoning steps, sleep-time compute dramatically reduces test-time latency and cost—without sacrificing performance.
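For readers who want the gist in code, here is a minimal sketch of the idea as described above. It assumes a generic `call_llm` helper (a placeholder, not the paper's or Letta's actual API): the expensive pass over the context happens offline during "sleep time," and the query-time call reuses the pre-computed notes.

```python
# Hypothetical sketch of the sleep-time compute idea discussed in the episode,
# not the Letta implementation. `call_llm` stands in for any chat-completion API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an API client); assumed here."""
    raise NotImplementedError

def sleep_time_compute(raw_context: str) -> str:
    """Offline step: anticipate likely questions and pre-compute reasoning
    over the context while no user is waiting."""
    prompt = (
        "Read the following context. Anticipate questions a user is likely to ask "
        "and write out the intermediate reasoning and derived facts needed to "
        "answer them.\n\nContext:\n" + raw_context
    )
    return raw_context + "\n\nPre-computed notes:\n" + call_llm(prompt)

def answer_at_test_time(enriched_context: str, query: str) -> str:
    """Online step: answer the user's query against the enriched context,
    ideally with far fewer reasoning tokens than answering from scratch."""
    prompt = (
        "Using the context and pre-computed notes below, answer the question "
        "concisely.\n\n" + enriched_context + "\n\nQuestion: " + query
    )
    return call_llm(prompt)
```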
We explore new benchmarks—Stateful GSM-Symbolic, Stateful AIME, and the multi-query extension of GSM—that show up to 5x lower compute at inference, 2.5x lower cost per query, and up to 18% higher accuracy when scaled.
You’ll also see how this method applies to realistic agent use cases and what makes it most effective. If you care about LLM efficiency, scalability, or cutting-edge research, this episode is for you.
Explore more AI research, or sign up to hear the next session live: arize.com/ai-research-papers
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.