Content provided by Arize AI. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Arize AI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.
Accurate KV Cache Quantization with Outlier Tokens Tracing

Duration: 25:11
 
Join us as we discuss "Accurate KV Cache Quantization with Outlier Tokens Tracing," a deep dive into improving the efficiency of LLM inference. The authors enhance KV cache quantization, a technique for reducing memory and compute costs during inference, with a method that identifies and excludes the outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance.
Paper: https://arxiv.org/abs/2505.10938
Slides: https://bit.ly/45wolpr
Join us for Arize Observe: https://arize.com/observe-2025/

Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
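
For listeners who want a concrete feel for the technique before reading the paper, below is a minimal NumPy sketch of the general idea discussed in the episode: score each cached token, trace the most extreme ones as outliers to keep in full precision, and quantize the rest to low-bit integers with per-token scales. This is an illustration only, not the authors' implementation; the top-k max-magnitude scoring rule, the 4-bit min-max scheme, and all function names are assumptions made for this sketch.

```python
import numpy as np

def quantize_kv_with_outliers(kv, num_outliers=4, bits=4):
    """Quantize one head's KV slice per token, keeping outlier tokens in fp.

    kv: (seq_len, head_dim) float array of keys or values.
    Returns low-bit codes for the inlier tokens, their per-token
    scale/offset, the traced outlier indices, and the outlier rows
    stored in full precision.
    """
    # Score each token by its largest-magnitude channel and trace the
    # top-k as outliers (the scoring rule is an assumption of this sketch).
    scores = np.abs(kv).max(axis=1)
    outlier_idx = np.sort(np.argsort(scores)[-num_outliers:])
    outlier_rows = kv[outlier_idx].copy()

    inlier_mask = np.ones(kv.shape[0], dtype=bool)
    inlier_mask[outlier_idx] = False
    inliers = kv[inlier_mask]

    # Asymmetric per-token min-max quantization of the remaining tokens.
    qmax = 2 ** bits - 1
    lo = inliers.min(axis=1, keepdims=True)
    hi = inliers.max(axis=1, keepdims=True)
    scale = np.maximum(hi - lo, 1e-8) / qmax
    codes = np.clip(np.round((inliers - lo) / scale), 0, qmax).astype(np.uint8)
    return codes, scale, lo, outlier_idx, outlier_rows

def dequantize(codes, scale, lo):
    # Map low-bit codes back to approximate float values.
    return codes.astype(np.float32) * scale + lo

# Toy demo: plant one extreme token and check it is traced, not quantized.
rng = np.random.default_rng(0)
kv = rng.normal(size=(128, 64)).astype(np.float32)
kv[7] *= 40.0  # an outlier token that would otherwise stretch the scales
codes, scale, lo, outlier_idx, outlier_rows = quantize_kv_with_outliers(kv)
inlier_ref = kv[~np.isin(np.arange(kv.shape[0]), outlier_idx)]
err = np.abs(dequantize(codes, scale, lo) - inlier_ref).mean()
print("traced outliers:", outlier_idx, "mean inlier error:", err)
```

Because the planted outlier token is excluded before the min-max scales are computed, it no longer stretches the quantization range for every other token, which is the intuition behind tracing outliers separately from the quantized cache.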
