Content provided by Carl Franklin. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Carl Franklin or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Measuring LLMs with Jodie Burchell

1:00:44
Episode 474868171, series 65612
How do you measure the quality of a large language model? Carl and Richard talk to Dr. Jodie Burchell about her work measuring large language models for accuracy, reliability, and consistency. Jodie talks about the variety of benchmarks that exist for LLMs and the problems they have. A broader conversation about quality digs into the idea that LLMs should be targeted to the particular topic area they are being used for - often, smaller is better! Building a good test suite for your LLM is challenging but can increase your confidence that the tool will work as expected.
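The episode's closing point is that a test suite for an LLM-backed tool pays for itself in confidence. As a minimal sketch of what such a suite might look like, here is a keyword-based evaluation harness in Python. The `stub_model` function, the cases, the keyword lists, and the 0.5 pass threshold are all illustrative assumptions, not anything described in the episode; in practice `stub_model` would be replaced by a call to your actual model.

```python
# Hedged sketch of an LLM test suite: score each response by the
# fraction of expected keywords it contains, then pass/fail against
# a threshold. Everything here is illustrative, not from the episode.

def keyword_score(response: str, expected_keywords: list[str]) -> float:
    """Return the fraction of expected keywords found in the response."""
    hits = sum(1 for kw in expected_keywords if kw.lower() in response.lower())
    return hits / len(expected_keywords)

def run_suite(model, cases, threshold=0.5):
    """Run every case through the model; a case passes if its score meets the threshold."""
    results = []
    for case in cases:
        response = model(case["prompt"])
        score = keyword_score(response, case["keywords"])
        results.append({"prompt": case["prompt"],
                        "score": score,
                        "passed": score >= threshold})
    return results

# Stand-in for a real LLM call (hypothetical fixed answer).
def stub_model(prompt: str) -> str:
    return "Paris is the capital of France."

cases = [
    {"prompt": "What is the capital of France?", "keywords": ["Paris", "France"]},
    {"prompt": "Name a prime number below 5.",  "keywords": ["2", "3"]},
]

results = run_suite(stub_model, cases)
print([r["passed"] for r in results])  # → [True, False]
```

Keyword matching is a deliberately crude metric; the point of the shape is that each case is explicit data, so swapping in a stricter scorer (exact match, an embedding similarity, or an LLM-as-judge) only changes `keyword_score`, not the suite.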

1087 episodes

.NET Rocks!

107 subscribers


