Docker Model Runner

13:06

Docker launched "Docker Model Runner" to run LLMs through llama.cpp with a single "docker model" command. In this episode, Bret walks through examples and useful use cases for running LLMs this way. He breaks down the internals: how it works, when you should (and shouldn't) use it, and how to get started with Open WebUI for a private ChatGPT-like experience.
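For a flavor of the workflow covered in the episode, here is a minimal sketch of the CLI flow. The model name and exact subcommands are assumptions based on Docker Hub's ai/ namespace, not a transcript of the episode; check the Model Runner docs linked below for the authoritative syntax.

```
# Confirm Docker Model Runner is enabled (a Docker Desktop feature)
docker model status

# Pull a model packaged as an OCI artifact from Docker Hub's ai/ namespace
# (this model name is an illustrative example, not one confirmed in the episode)
docker model pull ai/smollm2

# List models stored locally
docker model list

# Send a one-off prompt; llama.cpp serves the model under the hood
docker model run ai/smollm2 "Summarize what an OCI artifact is."
```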

★Topics★
Model Runner Docs
Hub Models
OCI Artifacts
Open WebUI
My Open WebUI Compose file
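The Compose file linked above is Bret's own. As a rough illustration only, a private ChatGPT-like setup pairs Open WebUI with Model Runner's OpenAI-compatible endpoint; the image tag, endpoint URL, and environment variable names below are assumptions rather than a copy of his file.

```
# Minimal sketch, not Bret's actual Compose file. Verify the Model Runner
# endpoint URL against the current docs; it can vary by Docker Desktop version.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                # browse the UI at http://localhost:3000
    environment:
      # Assumed OpenAI-compatible endpoint that Model Runner exposes to containers
      OPENAI_API_BASE_URL: http://model-runner.docker.internal/engines/v1
      OPENAI_API_KEY: unused       # a local endpoint needs no real key
    volumes:
      - open-webui:/app/backend/data
volumes:
  open-webui:
```

With something like this running, Open WebUI provides the private, ChatGPT-like chat experience described in the episode, with inference staying on your machine.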

Creators & Guests

  • (00:00) - Intro
  • (00:46) - Model Runner Elevator Pitch
  • (01:28) - Enabling Docker Model Runner
  • (04:28) - Self Promotion! Is that an ad? For me?
  • (05:03) - Downloading Models
  • (07:11) - Architecture of Model Runner
  • (10:49) - ORAS
  • (11:09) - What's next for Model Runner?
  • (12:13) - Troubleshooting

You can also support my content by subscribing to my YouTube channel and my weekly newsletter at bret.news!

Grab the best coupons for my Docker and Kubernetes courses.
Join my cloud native DevOps community on Discord.
Grab some merch at Bret's Loot Box
Homepage bretfisher.com
