#030 Vector Search at Scale, Why One Size Doesn't Fit All
Ever wondered why your vector search becomes painfully slow after scaling past a million vectors? You're not alone - even tech giants struggle with this.
Charles Xie, founder of Zilliz (company behind Milvus), shares how they solved vector database scaling challenges at 100B+ vector scale:
Key Insights:
- Multi-tier storage strategy:
  - GPU memory (hottest ~1% of data, fastest)
  - RAM (~10% of data)
  - Local SSD
  - Object storage (slowest but cheapest)
- Real-time search solution:
  - New data goes to a buffer (searchable immediately)
  - Index builds in the background when the buffer fills
  - Queries combine buffer and main-index results
- Performance optimization:
  - GPU acceleration for 10k-50k queries/second
  - Customizable trade-offs between:
    - Cost
    - Latency
    - Search relevance
- Future developments:
  - Self-learning indices
  - Hybrid search methods (dense + sparse)
  - Graph embedding support
  - ColBERT integration
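The real-time search idea above can be sketched in a few lines: fresh vectors sit in a small buffer that is scanned by brute force (so they are searchable immediately), and when the buffer fills it is sealed into a segment, which in a real system would get an ANN index built in the background. The class below is an illustrative simplification under those assumptions, not Milvus's actual implementation.

```python
import heapq
import numpy as np

class BufferedVectorIndex:
    """Toy sketch of the buffer-plus-index pattern (not Milvus code)."""

    def __init__(self, dim, buffer_size=1000):
        self.dim = dim
        self.buffer_size = buffer_size
        self.buffer = []    # recent, unindexed vectors: searchable right away
        self.segments = []  # sealed segments (stand-ins for indexed data)

    def insert(self, vec):
        self.buffer.append(np.asarray(vec, dtype=np.float32))
        if len(self.buffer) >= self.buffer_size:
            # A real system seals the buffer and builds the index
            # asynchronously; here we just stack it into a matrix.
            self.segments.append(np.stack(self.buffer))
            self.buffer = []

    def _search_matrix(self, mat, query, k):
        # Brute-force L2 scan; a sealed segment would use an ANN index.
        dists = np.linalg.norm(mat - query, axis=1)
        order = np.argsort(dists)[:k]
        return [(float(dists[i]), mat[i]) for i in order]

    def search(self, query, k=5):
        query = np.asarray(query, dtype=np.float32)
        candidates = []
        if self.buffer:
            candidates += self._search_matrix(np.stack(self.buffer), query, k)
        for seg in self.segments:
            candidates += self._search_matrix(seg, query, k)
        # Merge buffer and segment hits into one global top-k.
        return heapq.nsmallest(k, candidates, key=lambda t: t[0])
```

The point of the pattern is visibility: a vector inserted a millisecond ago is found by the next query, even though no index over it exists yet, because the merge step treats the unindexed buffer as just another candidate source.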
Perfect for teams hitting scaling walls with their current vector search implementation or planning for future growth.
Worth watching if you're building production search systems or need to balance cost against performance.
Charles Xie:
Nicolay Gerold:
00:00 Introduction to Search System Challenges
00:26 Introducing Milvus: The Open Source Vector Database
00:58 Interview with Charles: Founder of Zilliz
02:20 Scalability and Performance in Vector Databases
03:35 Challenges in Distributed Systems
05:46 Data Consistency and Real-Time Search
12:12 Hierarchical Storage and GPU Acceleration
18:34 Emerging Technologies in Vector Search
23:21 Self-Learning Indexes and Future Innovations
28:44 Key Takeaways and Conclusion