LangChain: LLM Integration for Elixir Apps with Mark Ericksen
Mark Ericksen, creator of the Elixir LangChain framework, joins the Elixir Wizards to talk about LLM integration in Elixir apps. He explains how LangChain abstracts away the quirks of different AI providers (OpenAI, Anthropic’s Claude, Google’s Gemini) so you can work with any LLM through one consistent API. We dig into core features like conversation chaining, tool execution, automatic retries, and production-grade fallback strategies.
Mark shares his experiences maintaining LangChain in a fast-moving AI world: how it shields developers from API drift, manages token budgets, and handles rate limits and outages. He also reveals testing tactics for non-deterministic AI outputs, configuration tips for custom authentication, and the highlights of the new v0.4 release, including “content parts” support for thinking-style models.
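To give a flavor of that unified API, here is a minimal sketch along the lines of the library's getting-started guide. The model name and prompt are illustrative, and depending on the release a message's content may be a plain string or (in v0.4) a list of content parts:

    # A chain works the same way regardless of provider; swapping
    # ChatOpenAI for another chat model is a one-line change.
    alias LangChain.Chains.LLMChain
    alias LangChain.ChatModels.ChatOpenAI
    alias LangChain.Message

    {:ok, updated_chain} =
      %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
      |> LLMChain.new!()
      |> LLMChain.add_message(Message.new_user!("Summarize Elixir in one sentence."))
      |> LLMChain.run()

    # The assistant's reply lives on the updated chain.
    IO.inspect(updated_chain.last_message.content)

The provider API key is typically supplied through application config, e.g. config :langchain, openai_key: System.get_env("OPENAI_API_KEY").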
Key topics discussed in this episode:
• Abstracting LLM APIs behind a unified Elixir interface
• Building and managing conversation chains across multiple models
• Exposing application functionality to LLMs through tool integrations (see the first sketch after this list)
• Automatic retries and fallback chains for production resilience
• Supporting a variety of LLM providers
• Tracking and optimizing token usage for cost control
• Configuring API keys, authentication, and provider-specific settings
• Handling rate limits and service outages with graceful degradation
• Processing multimodal inputs (text, images) in LangChain workflows
• Extracting structured data from unstructured LLM responses
• Leveraging “content parts” in v0.4 for advanced thinking-model support
• Debugging LLM interactions using verbose logging and telemetry
• Kickstarting experiments in Livebook notebooks and demos
• Comparing Elixir LangChain to the original Python implementation
• Crafting human-in-the-loop workflows for interactive AI features
• Integrating LangChain with the Ash framework for chat-driven interfaces
• Contributing to open-source LLM adapters and staying ahead of API changes
• Building fallback chains (e.g., OpenAI → Azure) for seamless continuity (see the second sketch after this list)
• Embedding business logic decisions directly into AI-powered tools
• Summarization techniques for token efficiency in ongoing conversations
• Batch processing tactics to leverage lower-cost API rate tiers
• Real-world lessons on maintaining uptime amid LLM service disruptions
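As a sketch of the tool-integration topic above: a LangChain Function wraps application code so the model can call it. The weather tool below is hypothetical and its return value is made up; the general shape follows the library's Function API:

    alias LangChain.Chains.LLMChain
    alias LangChain.ChatModels.ChatOpenAI
    alias LangChain.Function
    alias LangChain.Message

    # Hypothetical tool exposing app logic to the model.
    weather_tool =
      Function.new!(%{
        name: "get_current_weather",
        description: "Returns the current weather for a city.",
        parameters_schema: %{
          type: "object",
          properties: %{city: %{type: "string", description: "City name"}},
          required: ["city"]
        },
        function: fn %{"city" => city}, _context ->
          # A real app would call into its own business logic here.
          {:ok, "It is currently sunny in #{city}."}
        end
      })

    {:ok, updated_chain} =
      %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
      |> LLMChain.new!()
      |> LLMChain.add_tools([weather_tool])
      |> LLMChain.add_message(Message.new_user!("What is the weather in Boise?"))
      # :while_needs_response keeps the chain running (executing tool
      # calls) until the model produces a final answer.
      |> LLMChain.run(mode: :while_needs_response)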
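And a second sketch, for the fallback topic, assuming the with_fallbacks: run option from recent releases. The episode's OpenAI → Azure example would substitute an Azure-configured model; the Anthropic fallback and model names here are illustrative:

    alias LangChain.Chains.LLMChain
    alias LangChain.ChatModels.{ChatAnthropic, ChatOpenAI}
    alias LangChain.Message

    # If the primary model fails (rate limit, outage), the run is
    # retried against each fallback model in order.
    {:ok, updated_chain} =
      %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
      |> LLMChain.new!()
      |> LLMChain.add_message(Message.new_user!("Hello!"))
      |> LLMChain.run(
        with_fallbacks: [ChatAnthropic.new!(%{model: "claude-3-5-sonnet-latest"})]
      )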
Links mentioned:
https://rubyonrails.org/
https://fly.io/
https://zionnationalpark.com/
https://podcast.thinkingelixir.com/
https://github.com/brainlid/langchain
https://openai.com/
https://claude.ai/
https://gemini.google.com/
https://www.anthropic.com/
Vertex AI Studio https://cloud.google.com/generative-ai-studio
https://www.perplexity.ai/
https://azure.microsoft.com/
https://hexdocs.pm/ecto/Ecto.html
https://oban.pro/
Chris McCord’s ElixirConf EU 2025 Talk https://www.youtube.com/watch?v=ojL_VHc4gLk
Getting started:
https://hexdocs.pm/langchain/getting_started.html
https://ash-hq.org/
https://hex.pm/packages/langchain
https://hexdocs.pm/igniter/readme.html
https://www.youtube.com/watch?v=WM9iQlQSF_g
@brainlid on Twitter and Bluesky
Special Guest: Mark Ericksen.