
Episode 9: Large Language Models

19:47
 
Content provided by Tyrone Grandison. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Tyrone Grandison or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

This episode serves as an introduction to large language models (LLMs). It covers fundamental concepts, including pre-training methods and generative models, as well as practical aspects. The episode addresses instruction fine-tuning, chain-of-thought (CoT) prompting, and methods for enhancing LLM performance. It also discusses fine-tuning LLMs with labeled data and using reward models for instruction and human-preference alignment.
For more insights on AI, continue Navigating the AI Revolution.
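
The episode touches on chain-of-thought (CoT) prompting as one way to improve LLM performance. As a rough, non-authoritative sketch of the idea, the Python snippet below contrasts a direct prompt with a CoT-style prompt; the sample question, the prompt wording, and the call_llm placeholder are illustrative assumptions, not material from the episode.

QUESTION = (
    "A store sells pens in packs of 12. If Ada buys 4 packs and gives away "
    "7 pens, how many pens does she keep?"
)

# Direct prompt: ask for the answer with no intermediate reasoning.
direct_prompt = f"Q: {QUESTION}\nA:"

# Chain-of-thought prompt: instruct the model to reason step by step before
# answering, which tends to help on multi-step problems.
cot_prompt = (
    f"Q: {QUESTION}\n"
    "A: Let's think step by step. First find the total number of pens, "
    "then subtract the pens given away, then state the final answer."
)

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; wire this up to whichever LLM API you use.
    raise NotImplementedError

if __name__ == "__main__":
    print("--- direct prompt ---")
    print(direct_prompt)
    print()
    print("--- chain-of-thought prompt ---")
    print(cot_prompt)

The only difference between the two prompts is that the CoT variant asks the model to show its intermediate reasoning before the final answer, which is the core of the technique discussed in the episode.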

10 episodes
