Episode 54: Scaling AI: From Colab to Clusters — A Practitioner’s Guide to Distributed Training and Inference

Duration: 41:17
 
Content provided by Hugo Bowne-Anderson. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Hugo Bowne-Anderson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Colab is cozy. But production won’t fit on a single GPU.
Zach Mueller leads Accelerate at Hugging Face and spends his days helping people go from solo scripts to scalable systems. In this episode, he joins me to demystify distributed training and inference — not just for research labs, but for any ML engineer trying to ship real software.

We talk through:
• From Colab to clusters: why scaling isn’t just about training massive models, but serving agents, handling load, and speeding up iteration
• Zero-to-two GPUs: how to get started without Kubernetes, Slurm, or a PhD in networking
• Scaling tradeoffs: when to care about interconnects, which infra bottlenecks actually matter, and how to avoid chasing performance ghosts
• The GPU middle class: strategies for training and serving on a shoestring, with just a few cards or modest credits
• Local experiments, global impact: why learning distributed systems—even just a little—can set you apart as an engineer

If you’ve ever stared at a Hugging Face training script and wondered how to run it on something more than your laptop: this one’s for you.
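
To make that concrete, here is a minimal sketch (mine, not code from the episode) of the pattern Hugging Face Accelerate uses: wrap the objects your script already builds, and the same loop runs on a laptop CPU, one GPU, or several. The toy model, data, and the train.py filename are placeholders.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from accelerate import Accelerator

    accelerator = Accelerator()  # reads devices, mixed precision, etc. from the launch config

    # Toy model and data stand in for whatever your real script defines.
    model = torch.nn.Linear(128, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 2, (1024,)))
    dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

    # prepare() moves everything onto the right device(s) and shards the dataloader per process.
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for features, labels in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(features), labels)
        accelerator.backward(loss)  # replaces loss.backward() so gradients sync across processes
        optimizer.step()

    # Single machine:                              python train.py
    # Multiple GPUs (after `accelerate config`):   accelerate launch train.py

The point of the pattern is that the script itself stays the same; scaling out becomes a question of how you launch it, which is exactly the jump from Colab-style scripts to clusters the episode covers.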

LINKS

🎓 Learn more:

📺 Watch the video version on YouTube: YouTube link
