Content provided by Jon Krohn. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Jon Krohn or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
879: Serverless, Parallel, and AI-Assisted: The Future of Data Science is Here, with Zerve’s Dr. Greg Michaelson

1:07:14
Manage episode 477172251 series 2532807

Greg Michaelson speaks to Jon Krohn about the latest developments at Zerve, an operating system for developing and delivering data and AI products, including a new feature that lets users run multiple parts of a program’s code at once at no extra cost. You’ll also hear why LLMs might spell trouble for SaaS companies, Greg’s ‘good-cop, bad-cop’ routine for improving LLM responses, and how RAG (retrieval-augmented generation) can be deployed to create even more powerful AI applications.

Additional materials: www.superdatascience.com/879

This episode is brought to you by Trainium2, the latest AI chip from AWS and by the Dell AI Factory with NVIDIA.

Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.

In this episode you will learn:

  • (04:00) Zerve’s latest features
  • (35:26) How Zerve’s built-in API builder and GPU manager lower barriers to entry
  • (40:54) How to get started with Zerve
  • (41:49) Will LLMs make SaaS companies redundant?
  • (52:29) How to create fairer and more transparent AI systems
  • (56:07) The future of software developer workflows

960 episodes