Content provided by Alation. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Alation or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

LLMs Decoded: A Starter's Guide to AI with Raza Habib, co-founder & CEO of Humanloop

48:14
 
As AI becomes integral to every aspect of business, ensuring its accessibility for everyone—not just specialists—is essential. Companies like Humanloop are leading the charge with innovative platforms that empower non-technical users to harness the power of advanced language models through intuitive tools and frameworks.

Democratizing AI access paves the way for transformative business outcomes and a future of collaborative AI systems. However, building a strong AI strategy starts with leveraging powerful models and mastering prompt engineering before considering fine-tuning. Engaging subject matter experts and using robust evaluation and collaboration tools are equally critical to the success of modern AI projects. In this episode, Satyen and Raza examine the evolution of AI models, the practical challenges of model evaluation and prompt engineering, and the role of multidisciplinary teams in AI development.

*Satyen’s narration was created using AI

--------

“In our experience, fine-tuning is very useful as an optimization step. But it's not where we recommend people start. When people are trying to customize these models, we encourage them as much as possible to push the limits of prompt engineering with the most powerful model they can before they consider fine-tuning. The reason we suggest that is that it's much faster to change a prompt and see what the impact is. It's often sufficient to customize the models and it's less destructive. If you fine-tune a model and you want to update it later, you kind of have to start from scratch. You have to go back to the base model with your labeled data set and re-fine-tune from the beginning. If you're customizing the model via prompts and you want to make a change, you just go change the text and you can see the difference. There's a much faster iteration cycle and you can get most of the benefit.” – Raza Habib
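
To make the contrast Raza describes concrete, here is a minimal, hypothetical sketch using the OpenAI Python client (it is not code from the episode); the model names, prompt text, and training-file ID are placeholders, and any provider's SDK would illustrate the same point.

```python
# Hypothetical sketch (not from the episode): the two customization paths
# Raza contrasts, using the OpenAI Python client (v1.x). Model names,
# prompts, and the training-file ID below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer(ticket: str, system_prompt: str) -> str:
    """Path 1: prompt engineering. Iterating = editing the prompt string and re-running."""
    response = client.chat.completions.create(
        model="gpt-4o",  # use the most capable model available to you
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": ticket},
        ],
    )
    return response.choices[0].message.content


draft = "Summarize the support ticket in two sentences."
revision = "Summarize the support ticket in two sentences, then label its urgency."
# Comparing answer(ticket, draft) with answer(ticket, revision) is the whole loop.


def start_fine_tune(training_file_id: str) -> str:
    """Path 2: fine-tuning. Updating later means re-running this job from the base model."""
    job = client.fine_tuning.jobs.create(
        training_file=training_file_id,  # e.g. an uploaded JSONL of labeled examples
        model="gpt-4o-mini",             # base model to fine-tune
    )
    return job.id
```

The asymmetry shows up directly in the sketch: swapping draft for revision takes seconds, while updating fine-tuned behavior means assembling a new labeled dataset and launching another job from the base model.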

--------

Time Stamps

*(01:26): Raza’s career journey: From academia to industry

*(12:46): What is active learning?

*(17:20): How LLMs diverge from traditional software processes

*(24:53): What is data leakage?

*(35:56): How can software engineers adapt in the age of AI?

*(47:04): Satyen’s takeaways

--------

Sponsor

This podcast is presented by Alation.

Learn more:

* Subscribe to the newsletter: https://www.alation.com/podcast/

* Alation’s LinkedIn Profile: https://www.linkedin.com/company/alation/

* Satyen’s LinkedIn Profile: https://www.linkedin.com/in/ssangani/

--------

Links

Connect with Raza on LinkedIn

Learn more about Humanloop

Listen to Raza’s podcast

Order Information Theory, Inference, and Learning Algorithms by David MacKay
