#021 The Problems You Will Encounter With RAG At Scale And How To Prevent (or fix) Them

Hey! Welcome back.

Today we look at how we can get our RAG system ready for scale.

We discuss common problems and their solutions when you introduce more users and more requests to your system.

For this, we are joined by Nirant Kasliwal, the author of fastembed.

Nirant shares practical insights on metadata extraction, evaluation strategies, and emerging technologies like ColPali. This episode is a must-listen for anyone looking to level up their RAG implementations.

"Naive RAG has a lot of problems on the retrieval end and then there's a lot of problems on how LLMs look at these data points as well."

"The first 30 to 50% of gains are relatively quick. The rest 50% takes forever."

"You do not want to give the same answer about company's history to the co-founding CEO and the intern who has just joined."

"Embedding similarity is the signal on which you want to build your entire search is just not quite complete."

Key insights:

  • Naive RAG often fails due to limitations of embeddings and LLMs' sensitivity to input ordering.
  • Query profiling and expansion (see the first two sketches after this list):
    • Use clustering and tools like Latent Scope to identify problematic query types
    • Expand queries offline and use parallel searches for better results
  • Metadata extraction (see the extraction sketch below):
    • Extract temporal, entity, and other relevant information from queries
    • Use LLMs for extraction, with checks against libraries like Stanford NLP
  • User personalization (see the readability sketch below):
    • Include user role, access privileges, and conversation history
    • Adapt responses based on user expertise and readability scores
  • Evaluation and improvement (see the evaluation sketch below):
    • Create synthetic datasets and use real user feedback
    • Employ tools like DSPy for prompt engineering
  • Advanced techniques (see the routing sketch below):
    • Query routing based on type and urgency
    • Use smaller models (1-3B parameters) for easier iteration and error spotting
    • Implement error handling and cross-validation for extracted metadata
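
To make the query-profiling idea concrete, here is a minimal sketch that clusters logged queries with fastembed embeddings and scikit-learn's KMeans as a stand-in for a purpose-built tool like Latent Scope. The sample queries, model choice, and cluster count are placeholders.

```python
# Cluster logged user queries to surface problematic query types.
# Sketch only: "queries" stands in for your real query log.
from collections import Counter

from fastembed import TextEmbedding   # pip install fastembed
from sklearn.cluster import KMeans    # pip install scikit-learn

queries = [
    "refund policy for the enterprise plan",
    "how do I rotate my API key?",
    "compare pricing tiers",
    "upload fails with a 500 error",
    "when was the company founded?",
    "export my data to CSV",
    # ...in practice, thousands of logged queries
]

# Embed every query with a small, fast model.
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
vectors = list(model.embed(queries))

# Cluster, then inspect small or low-performing clusters by hand to
# find the query types your retrieval handles badly.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(vectors)

for cluster_id, count in Counter(labels).most_common():
    example = next(q for q, l in zip(queries, labels) if l == cluster_id)
    print(f"cluster {cluster_id}: {count} queries, e.g. {example!r}")
```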
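
Offline expansion plus parallel search could look like the sketch below, assuming a hypothetical `search()` function in place of your real retrieval call; the ranked lists are merged with reciprocal rank fusion.

```python
# Expand a query into variants (precomputed offline), search in
# parallel, and fuse the ranked lists with reciprocal rank fusion.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor


def search(query: str) -> list[str]:
    """Hypothetical stand-in for your existing retrieval call;
    returns document ids ordered by relevance."""
    raise NotImplementedError


# Precomputed offline, e.g. by an LLM batch job over the query log.
EXPANSIONS = {
    "reset password": [
        "reset password",
        "forgot login credentials",
        "account recovery steps",
    ],
}


def fused_search(query: str, k: int = 60) -> list[str]:
    variants = EXPANSIONS.get(query, [query])
    with ThreadPoolExecutor() as pool:
        ranked_lists = list(pool.map(search, variants))

    # Documents that rank well across several variants float to the top.
    scores: dict[str, float] = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```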
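
For metadata extraction with cross-validation, a sketch along these lines is one option; the JSON schema, model name, and prompt are assumptions, and Stanza (from the Stanford NLP group) is used for the NER check.

```python
# Extract temporal/entity metadata from a query with an LLM, then
# cross-check entities with a classical NER pipeline before trusting
# them as search filters.
import json

import stanza              # pip install stanza; run stanza.download("en") once
from openai import OpenAI  # pip install openai

client = OpenAI()
ner = stanza.Pipeline(lang="en", processors="tokenize,ner")

SYSTEM_PROMPT = (
    "Extract search metadata from the user query as JSON with the keys "
    "'time_range', 'entities', and 'topic'. Use null for missing fields."
)


def extract_metadata(query: str) -> dict:
    # Model name and schema are illustrative, not prescriptive.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": query},
        ],
    )
    try:
        metadata = json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        # Error handling: fall back to an empty filter set rather than
        # failing the whole request.
        return {"time_range": None, "entities": [], "topic": None}

    # Cross-validation: keep only entities the NER pipeline also sees,
    # so a hallucinated entity cannot poison the filters.
    ner_entities = {ent.text.lower() for ent in ner(query).ents}
    metadata["entities"] = [
        e for e in metadata.get("entities") or [] if e.lower() in ner_entities
    ]
    return metadata
```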
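
The readability point could be wired up as below, assuming the textstat package; the role-to-grade targets are invented for illustration.

```python
# Gate answers on a readability score and regenerate when the draft
# does not match the reader's level. Role-to-grade targets are
# illustrative assumptions, not recommendations.
import textstat  # pip install textstat

TARGET_GRADE = {
    "intern": 9.0,            # simpler, more explanatory answers
    "senior_engineer": 14.0,  # denser, more technical answers
    "executive": 11.0,
}


def needs_rewrite(answer: str, user_role: str, tolerance: float = 2.0) -> bool:
    grade = textstat.flesch_kincaid_grade(answer)
    target = TARGET_GRADE.get(user_role, 11.0)
    return abs(grade - target) > tolerance


draft = "The ingestion DAG idempotently upserts chunk embeddings into the vector store."
if needs_rewrite(draft, "intern"):
    # Re-prompt the LLM, e.g. "explain this for someone who joined last week".
    print("Regenerate with simpler wording and more context.")
```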
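
A bare-bones evaluation loop over a synthetic dataset might look like this; `retrieve()` and the example pairs are placeholders, and prompt optimization with a tool like DSPy would sit on top of a measurement loop of this kind.

```python
# Minimal evaluation loop over synthetic (question, source document)
# pairs. retrieve() and the example rows are placeholders for your
# own retriever and data.

def retrieve(question: str, top_k: int = 5) -> list[str]:
    """Hypothetical retrieval call returning document ids."""
    raise NotImplementedError


# In practice: ask an LLM to write a question each chunk answers,
# spot-check the pairs by hand, and mix in real user questions as
# feedback arrives.
synthetic_eval = [
    {"question": "What is the refund window for annual plans?", "doc_id": "billing-faq-12"},
    {"question": "How do I rotate an API key?", "doc_id": "security-guide-03"},
]


def hit_rate(dataset: list[dict], top_k: int = 5) -> float:
    hits = sum(
        row["doc_id"] in retrieve(row["question"], top_k=top_k)
        for row in dataset
    )
    return hits / len(dataset)
```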
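
For query routing, a nearest-centroid router over fastembed vectors is one lightweight option; the route names and example queries are made up.

```python
# Route queries to different pipelines by nearest-centroid matching
# over fastembed vectors. Route names and example queries are made up.
import numpy as np
from fastembed import TextEmbedding

ROUTES = {
    "factual_lookup": [
        "what is our refund policy",
        "when was the company founded",
    ],
    "troubleshooting": [
        "my api key stopped working",
        "upload fails with a 500 error",
    ],
    "urgent_escalation": [
        "production is down",
        "we are losing customer data",
    ],
}

model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
centroids = {
    name: np.mean(list(model.embed(examples)), axis=0)
    for name, examples in ROUTES.items()
}


def route(query: str) -> str:
    vec = next(iter(model.embed([query])))

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return max(centroids, key=lambda name: cosine(vec, centroids[name]))
```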

Nirant Kasliwal:

Nicolay Gerold:

query understanding, AI-powered search, Lambda Mart, e-commerce ranking, networking, experts, recommendation, search
