
Content provided by Hugo Bowne-Anderson. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Hugo Bowne-Anderson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Episode 49: Why Data and AI Still Break at Scale (and What to Do About It)

Duration: 1:21:45
 

If we want AI systems that actually work in production, we need better infrastructure—not just better models.

In this episode, Hugo talks with Akshay Agrawal (Marimo, ex-Google Brain, Netflix, Stanford) about why data and AI pipelines still break down at scale, and how we can fix the fundamentals: reproducibility, composability, and reliable execution.

They discuss:
🔁 Why reactive execution matters—and how current tools fall short
🛠️ The design goals behind Marimo, a new kind of Python notebook
⚙️ The hidden costs of traditional workflows (and what breaks at scale)
📦 What it takes to build modular, maintainable AI apps
🧪 Why debugging LLM systems is so hard—and what better tooling looks like
🌍 What we can learn from decades of tools built for and by data practitioners

Toward the end of the episode, Hugo and Akshay walk through two live demos: Hugo shares how he’s been using Marimo to prototype an app that extracts structured data from world leader bios, and Akshay shows how Marimo handles agentic workflows with memory and tool use—built entirely in a notebook.
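For readers unfamiliar with the reactive model mentioned above, here is a rough, illustrative sketch (not taken from the episode) of how a Marimo notebook file is structured: each cell is a function whose parameters are the names it reads and whose returned tuple lists the names it defines, which lets Marimo re-run downstream cells automatically when an upstream value changes. The slider and variable names below are hypothetical.

import marimo

app = marimo.App()

@app.cell
def _():
    import marimo as mo
    return (mo,)

@app.cell
def _(mo):
    # A UI element; changing it invalidates every cell that reads `top_k`.
    top_k = mo.ui.slider(1, 10, value=5, label="top_k")
    top_k
    return (top_k,)

@app.cell
def _(top_k):
    # Re-runs automatically whenever the slider above changes,
    # because Marimo infers the dependency from the parameter list.
    config = {"top_k": top_k.value}
    config
    return (config,)

if __name__ == "__main__":
    app.run()

Because dependencies are tracked through cell signatures rather than execution order, the notebook stays consistent as it grows, which is the property the demos lean on.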

This episode is about tools, but it’s also about culture. If you’ve ever hit a wall with your current stack—or felt like your tools were working against you—this one’s for you.

LINKS

🎓 Want to go deeper?
Check out Hugo's course: Building LLM Applications for Data Scientists and Software Engineers.
Learn how to design, test, and deploy production-grade LLM systems — with observability, feedback loops, and structure built in.
This isn’t about vibes or fragile agents. It’s about making LLMs reliable, testable, and actually useful.

Includes over $800 in compute credits and guest lectures from experts at DeepMind, Moderna, and more.
Cohort starts July 8 — use this link for a 10% discount.
