LLMs: Totally Not Making Stuff Up (they promise) (Ep. 263)

28:06

In this episode, we dive into the wild world of Large Language Models (LLMs) and their knack for… making things up. Can they really generalize without throwing in some fictional facts? Or is hallucination just part of their charm?
Let’s separate the genius from the guesswork in this insightful breakdown of AI’s creativity problem.

TL;DR

LLM generalisation without hallucinations: is that possible?

References

https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf

https://www.lamini.ai/blog/lamini-memory-tuning
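To give a flavour of the memory-tuning idea discussed in the references, here is a minimal sketch, not Lamini's actual implementation: a frozen base model gets a small LoRA adapter that is deliberately overfit on a handful of facts until the loss on them is near zero, so the adapter acts as exact memory while general ability stays in the frozen weights. The model name, hyperparameters, and toy facts below are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch of "memory tuning" using Hugging Face transformers + peft.
# The adapter is overfit on purpose: for memorised facts, near-zero loss is the goal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # stand-in; the referenced work uses larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
base = AutoModelForCausalLM.from_pretrained(model_name)

# Attach a small LoRA adapter ("memory expert") to the frozen base model.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, config)

facts = [  # toy facts for illustration
    "Q: Who hosts Data Science at Home? A: Francesco Gadaleta.",
    "Q: What does memory tuning target? A: Near-zero loss on stored facts.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model.train()
for step in range(200):
    batch = tokenizer(facts, return_tensors="pt", padding=True)
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if out.loss.item() < 1e-3:  # stop once the facts are memorised
        break

# Quick check: the memorised fact should now come back verbatim.
model.eval()
prompt = tokenizer("Q: Who hosts Data Science at Home? A:", return_tensors="pt")
print(tokenizer.decode(
    model.generate(**prompt, max_new_tokens=8, pad_token_id=tokenizer.eos_token_id)[0]
))
```

The overfitting here is the point, not a bug: facts need exact recall, so the adapter's loss is driven to roughly zero, while generalisation remains the job of the untouched base weights. The references describe scaling this idea across many such adapters.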
