#022 The Limits of Embeddings, Out-of-Domain Data, Long Context, Finetuning (and How We're Fixing It)

46:05
 
Text embeddings have limitations when it comes to handling long documents and out-of-domain data.

Today, we are talking to Nils Reimers. He is one of the researchers who kickstarted the field of dense embeddings: he developed Sentence Transformers, started Hugging Face's neural search team, and now leads the development of search foundation models at Cohere. Frankly, he has too many accolades to list here.

We talk about the main limitations of embeddings:

  • Failing out of domain
  • Struggling with long documents (see the short sketch after this list)
  • Very hard to debug
  • Hard to formalize what actually counts as similar
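
To make the long-document limitation concrete: most embedding models accept only a fixed token budget and silently truncate the rest, so a long document is represented by its opening tokens alone. A minimal sketch with the sentence-transformers library (the model name is a common default chosen for illustration, not one named in the episode):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # The model only reads this many tokens; anything beyond them
    # is silently dropped before encoding.
    print(model.max_seq_length)  # 256 for this model

    long_document = "word " * 10_000
    embedding = model.encode(long_document)  # only the first ~256 tokens matter
    print(embedding.shape)  # (384,) -- one fixed-size vector, however long the input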

Are you still not sure whether to listen? Here are some teasers:

  • Interpreting embeddings can be challenging, and current models are not easily explainable.
  • Fine-tuning is necessary to adapt embeddings to specific domains, but it requires careful consideration of the data and objectives.
  • Re-ranking is an effective approach to handle long documents and incorporate additional factors like recency and trustworthiness (a sketch follows this list).
  • The future of embeddings lies in addressing scalability issues and exploring new research directions.
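
To sketch the retrieve-then-rerank idea: a first-stage embedding search fetches candidates, then a cross-encoder scores each (query, document) pair jointly and reorders them. A minimal example with the CrossEncoder class from the sentence-transformers library; the model name and the toy texts are illustrative assumptions, not the episode's setup:

    from sentence_transformers import CrossEncoder

    # A cross-encoder reads the query and a candidate together, so it can
    # judge relevance more precisely than a single pre-computed vector.
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

    query = "What are the limits of text embeddings?"
    candidates = [  # e.g. the top hits from a first-stage embedding search
        "Dense embeddings often fail on out-of-domain data.",
        "The weather in Berlin is mild in September.",
        "Long documents get truncated before they are embedded.",
    ]

    # Score each (query, candidate) pair and sort by relevance.
    scores = reranker.predict([(query, doc) for doc in candidates])
    reranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    for doc, score in reranked:
        print(f"{score:.3f}  {doc}")

Because the re-ranker only sees a short candidate list, it can afford a heavier model, and extra signals such as recency or trustworthiness can be combined with its score at this stage.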

Nils Reimers:

Nicolay Gerold:

text embeddings, limitations, long documents, interpretation, fine-tuning, re-ranking, future research

00:00 Introduction and Guest Introduction
00:43 Early Work with BERT and Argument Mining
02:24 Evolution and Innovations in Embeddings
03:39 Contrastive Learning and Hard Negatives
05:17 Training and Fine-Tuning Embedding Models
12:48 Challenges and Limitations of Embeddings
18:16 Adapting Embeddings to New Domains
22:41 Handling Long Documents and Re-Ranking
31:08 Combining Embeddings with Traditional ML
45:16 Conclusion and Upcoming Episodes
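
The chapter on contrastive learning with hard negatives maps to the standard way embedding models are fine-tuned for a new domain. A minimal sketch using the classic sentence-transformers training loop; the model name and toy pairs are placeholders for illustration, not the episode's actual setup:

    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # In-domain (query, relevant passage) pairs. With this loss, the other
    # passages in a batch act as easy negatives; explicit hard negatives
    # can be supplied as a third text in each example.
    train_examples = [
        InputExample(texts=["what limits embeddings?", "Embeddings fail out of domain."]),
        InputExample(texts=["how long can inputs be?", "Long documents are truncated."]),
    ]
    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
    train_loss = losses.MultipleNegativesRankingLoss(model)

    # One pass over the toy data; real fine-tuning needs far more pairs.
    model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=0)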
