dbt Labs Co-Founder Drew Banin

28:02
 
Content provided by Sama. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Sama or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.

Key Points From This Episode:

  • Drew and his co-founders’ background working together at RJ Metrics.
  • The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
  • Initial adoption of dbt Labs and why it was so well-received from the very beginning.
  • The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
  • Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
  • Unpacking examples where LLMs struggle with specific questions, like math problems.
  • The importance of thoughtful prompt engineering and application design with LLMs.
  • What is needed to maximize the utility of LLMs in enterprise settings.
  • How understanding the specific use case can help you get better results from LLMs.
  • What developers can do to constrain the search space and provide better output.
  • Why Drew believes prompt engineering will become less important for the average user.
  • The exciting potential of vector embeddings and the ongoing evolution of LLMs.

Quotes:

“Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]

“One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]

“I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]

“My belief is that prompt engineering will become less important over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]

Links Mentioned in Today’s Episode:

GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Drew Banin on LinkedIn
dbt Labs

How AI Happens

Sama
