Fixing LLM Hallucinations with Facts
Content provided by Yogendra Miraje.
This episode explores how Google researchers are tackling "hallucinations" in Large Language Models (LLMs) by grounding them in Data Commons (https://datacommons.org/), a vast repository of publicly available statistical data. The researchers experiment with two techniques: Retrieval Interleaved Generation (RIG), where the LLM is trained to generate natural language queries that fetch data from Data Commons, and Retrieval Augmented Generation (RAG), where relevant data tables from Data...
…
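The RIG idea described above can be sketched roughly as follows: the fine-tuned model emits an inline query marker instead of a raw number, and a post-processing step intercepts the marker, looks the statistic up, and substitutes the fetched value. This is a minimal illustration, not the researchers' implementation; the `[DC(...)]` marker syntax and the in-memory lookup table are assumptions standing in for a real call to the Data Commons API.

```python
import re

# Hypothetical stand-in for Data Commons; a real RIG pipeline would
# query the Data Commons API (datacommons.org) here instead.
FAKE_DATA_COMMONS = {
    "population of California 2022": "39,029,342",
}

def answer_query(natural_language_query: str) -> str:
    """Resolve a natural-language statistical query to a value (stubbed)."""
    return FAKE_DATA_COMMONS.get(natural_language_query, "[no data]")

def rig_postprocess(llm_output: str) -> str:
    """Replace every [DC(query)] marker the model emitted with fetched data."""
    def substitute(match: re.Match) -> str:
        return answer_query(match.group(1))
    return re.sub(r"\[DC\((.*?)\)\]", substitute, llm_output)

# The model generates a query marker rather than guessing the number:
model_output = "California had a population of [DC(population of California 2022)]."
print(rig_postprocess(model_output))
```

RAG, by contrast, would fetch the relevant tables before generation and place them in the model's context, rather than interleaving lookups with the generated text.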