The Magic of RubyLLM with Carmine Paolino - RUBY 676

1:15:08
 
Content provided by Charles M Wood. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Charles M Wood or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
In this episode, we had the absolute pleasure of sitting down with Carmine Paolino — an AI innovator, Ruby enthusiast, and all-around tech wizard. From his early days automating PC games at age five to building cutting-edge AI tools in Berlin, Carmine’s journey is as inspiring as it is impressive.
We dove deep into his latest creation: RubyLLM, a Ruby gem that simplifies working with large language models (LLMs) like GPT-4, Claude, and Gemini. Think of it as an intuitive, plug-and-play toolkit that lets Ruby developers tap into powerful AI features — chat, image generation, embedding, tools, and even multi-model support — with just a few lines of code. And yes, it’s as awesome as it sounds.
Key Takeaways:
  • RubyLLM is built for simplicity and power. Carmine wanted a tool that “just works” — one unified interface for chatting, streaming, tool use, image generation, and more. It abstracts away the API mess and keeps things Ruby-friendly.
  • Tooling support is next-level. RubyLLM allows for agentic AI by letting devs define tools (like checking the weather or sending a calendar invite). The gem handles when and how to use them — magic!
  • Support for multiple models and providers. OpenAI, Anthropic, Google — RubyLLM makes it easy to switch between them seamlessly, even mid-conversation. Carmine also teased a future integration with a smarter model registry via an AI-powered API called Parsera.
  • Streaming and performance? Covered. Carmine shares clever architecture tricks using Turbo Streams and async Ruby for blazing-fast, lightweight responses — even when handling many concurrent users. (A rough sketch of this pattern appears after the list.)
  • Real-world use case: ChatWithWork. Carmine’s app lets users “chat” with their docs from Google Drive, Notion, and Slack. RubyLLM is the backbone, and it’s got real startup traction. (Oh, and he DJed the night it went viral on Hacker News.)
  • Embeddings and image generation are just as easy. Need vector search or auto-generated podcast art? Just call .embed or .paint — seriously, that's it. (See the code sketch below.)
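
To give a feel for how little code is involved, here is a rough sketch of the calls discussed above. It assumes the RubyLLM gem is installed and provider API keys are configured; the Weather tool and the exact option names are illustrative, so check the gem's documentation for the canonical interface.

  require 'ruby_llm'

  # Point the gem at whichever providers you have keys for.
  RubyLLM.configure do |config|
    config.openai_api_key    = ENV['OPENAI_API_KEY']
    config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  end

  # One unified chat interface, whatever the provider.
  chat = RubyLLM.chat
  chat.ask "Explain Ruby blocks in one paragraph"

  # Switch models mid-conversation.
  chat.with_model('claude-3-5-sonnet').ask "Now show me a code example"

  # A tool the model can decide to call on its own (hypothetical Weather example).
  class Weather < RubyLLM::Tool
    description "Returns the current temperature for a city"
    param :city, desc: "City name"

    def execute(city:)
      { city: city, temp_c: 21 } # stub; call a real weather API here
    end
  end

  chat.with_tool(Weather).ask "Do I need a jacket in Berlin today?"

  # Streaming: the block receives chunks as they arrive.
  chat.ask "Tell me a short story" do |chunk|
    print chunk.content
  end

  # Embeddings and image generation are one-liners.
  RubyLLM.embed "vector-search this sentence"
  RubyLLM.paint "retro podcast cover art, synthwave style"

And for the streaming-and-concurrency point, a minimal sketch of the general pattern (not Carmine's actual code): run the LLM call in a background job and push each chunk to the page with Turbo Streams. ChatStreamJob, the Chat model, and the assistant_reply target are made-up names for illustration.

  class ChatStreamJob < ApplicationJob
    def perform(chat_id, prompt)
      chat_record = Chat.find(chat_id)   # hypothetical ActiveRecord model
      buffer = +""

      RubyLLM.chat.ask(prompt) do |chunk|
        buffer << chunk.content.to_s
        # Replace the placeholder element in the browser as the answer grows.
        Turbo::StreamsChannel.broadcast_replace_to(
          chat_record,
          target: "assistant_reply",
          html: ERB::Util.html_escape(buffer)
        )
      end
    end
  end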

Become a supporter of this podcast: https://www.spreaker.com/podcast/ruby-rogues--6102073/support.