Why are Transformers so effective in Large Language Models like ChatGPT?

9:43
 
Manage episode 359298739 series 2859018
Content provided by Jay Shah. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Jay Shah or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

Understanding why and how transformers are so effective in today's large language models, such as #chatgpt, and more.
Watch the full podcast with Dr. Surbhi Goel here: https://youtu.be/stB0cY_fffo
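For readers curious about the mechanism behind the episode's question, below is a minimal, illustrative sketch of scaled dot-product self-attention, the core transformer operation that lets every token attend to every other token in parallel. This is not code from the episode; the function names, shapes, and NumPy usage are assumptions for illustration only.

    # Illustrative sketch (not from the episode): scaled dot-product self-attention.
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k) projections
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)           # every token scores every other token at once
        weights = softmax(scores, axis=-1)        # attention weights, one row per token
        return weights @ V                        # each output is a weighted mix of value vectors

    # Toy usage with hypothetical dimensions
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dimensional embeddings
    Wq, Wk, Wv = [rng.normal(size=(16, 8)) for _ in range(3)]
    out = self_attention(X, Wq, Wk, Wv)
    print(out.shape)                              # (5, 8)

Because the attention scores for all token pairs come out of a single matrix product, the whole sequence can be processed in parallel on modern hardware, which is one commonly cited reason transformers scale so well in large language models.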
Find Dr. Goel on social media
Website: https://www.surbhigoel.com/
LinkedIn: https://www.linkedin.com/in/surbhi-goel-5455b25a
Twitter: https://twitter.com/surbhigoel_?lang=en
Learning Theory Alliance: https://let-all.com/index.html
About the Host:
Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage (for any queries): https://www.public.asu.edu/~jgshah1/
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***

Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/


95 episodes
