Content provided by Demetrios. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Demetrios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Serving ML Models at a High Scale with Low Latency // Manoj Agarwal // MLOps Meetup #48

56:17
 

MLOps community meetup #48! Last Wednesday, we talked to Manoj Agarwal, Software Architect at Salesforce.
// Abstract:
Serving machine learning models is a scalability challenge at many companies. Most applications require only a small number of models (often < 100) to serve predictions. Cloud platforms that offer model serving, on the other hand, can host hundreds of thousands of models, but they provision separate hardware for different customers. Salesforce faces a challenge that very few companies deal with: it needs to run hundreds of thousands of models on shared infrastructure across multiple tenants for cost-effectiveness.
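The core technique the abstract points at — sharding many tenants' models across a shared pool of serving nodes, with replication — can be sketched with a consistent-hash ring. This is an illustrative sketch only; the class, node names, and parameters are hypothetical, not taken from the talk:

```python
import hashlib
from bisect import bisect_right

class ModelShardRing:
    """Consistent-hash ring mapping model IDs to serving nodes.

    Each model lives on `replicas` distinct nodes, so one hot model or
    one failed node does not take down predictions for a tenant.
    """

    def __init__(self, nodes, vnodes=64, replicas=2):
        self.replicas = replicas
        # Place several virtual points per node on the ring for balance.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def nodes_for(self, model_id: str):
        """Return the distinct nodes that should host this model."""
        idx = bisect_right(self.keys, self._hash(model_id)) % len(self.ring)
        chosen = []
        while len(chosen) < self.replicas:
            node = self.ring[idx % len(self.ring)][1]
            if node not in chosen:
                chosen.append(node)
            idx += 1
        return chosen

ring = ModelShardRing(["node-a", "node-b", "node-c", "node-d"])
replica_set = ring.nodes_for("tenant-42/churn-model")
```

Consistent hashing keeps most model-to-node assignments stable when nodes are added or removed, which matters when the fleet autoscales under hundreds of thousands of models.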
// Takeaways:
This talk explains how Salesforce hosts hundreds of thousands of models on a multi-tenant infrastructure to support low-latency predictions.
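Low-latency serving over far more models than fit in one machine's memory typically relies on keeping hot models resident and loading cold ones on demand. A minimal, hypothetical sketch of such a model cache (not the actual Einstein implementation; the loader and names are assumptions):

```python
from collections import OrderedDict

class ModelCache:
    """LRU cache of deserialized models; cold models are loaded on demand."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader            # slow path: fetch + deserialize a model
        self._cache = OrderedDict()

    def predict(self, model_id, features):
        model = self._cache.get(model_id)
        if model is None:
            model = self.loader(model_id)          # cold load from model store
            self._cache[model_id] = model
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)    # evict least-recently-used
        else:
            self._cache.move_to_end(model_id)      # mark as most-recently-used
        return model(features)

# Toy "model store": each loaded model just tags its input with its id.
cache = ModelCache(capacity=2, loader=lambda mid: (lambda x: f"{mid}:{x}"))
result = cache.predict("tenant-1/churn", 7)
```

Hot models answer from memory; only the first request for a cold model pays the load cost, which is the usual trade-off behind multi-tenant low-latency serving.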
// Bio:
Manoj Agarwal is a Software Architect on the Einstein Platform team at Salesforce. Salesforce Einstein was released back in 2016, integrated with all the major Salesforce clouds. Fast forward to today, and Einstein delivers 80+ billion predictions per day across the Sales, Service, Marketing & Commerce Clouds.

//Relevant Links
https://engineering.salesforce.com/flow-scheduling-for-the-einstein-ml-platform-b11ec4f74f97
https://engineering.salesforce.com/ml-lake-building-salesforces-data-platform-for-machine-learning-228c30e21f16
----------- Connect With Us ✌️-------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Manoj on LinkedIn: https://www.linkedin.com/in/agarwalmk/
Timestamps:
[00:00] Happy birthday Manoj!
[00:41] Salesforce blog post about Einstein and ML Infrastructure
[02:55] Intro to Serving Large Number of Models with Low Latency
[03:34] Manoj's background
[04:22] Machine Learning Engineering: 99% engineering + 1% machine learning - Alexey Grigorev on Twitter
[04:37] Salesforce Einstein
[06:42] Machine Learning: Big Picture
[07:05] Feature Engineering
[07:30] Model Training
[08:53] Model Serving Requirements
[13:01] Do you standardize how models are packaged in order to be served, and if so, what standards does Salesforce require and enforce for model packaging?
[14:29] Support Multiple Frameworks
[16:16] Is it easy to just throw a software library in there?
[27:06] Along with that metadata, can you break down how that goes?
[28:27] Low Latency
[32:30] Model Sharding with Replication
[33:58] What would you do to speed up transformation code run before scoring?
[35:55] Model Serving Scaling
[37:06] Noisy Neighbor: Shuffle Sharding
[39:29] If all the Salesforce models can be categorized into different model types based on what they provide, what would some of the big categories be, and which is the biggest?
[46:27] Retraining of the model: does your team handle that, or is it distributed out, with your team handling this kind of engineering and another team handling the machine learning side of it?
[50:13] How do you ensure that different models created by different teams of data scientists expose the same data in order to be analyzed?
[52:08] Are you using Kubernetes or another orchestration engine?
[53:03] How is it ensured that different models expose the same information?
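The noisy-neighbor mitigation discussed at [37:06] is shuffle sharding: each tenant is assigned a small, deterministic, pseudo-random subset of nodes, so very few tenant pairs share a full shard and one tenant's overload stays contained. A hedged sketch of the idea; the function and node names are assumptions, not from the talk:

```python
import hashlib

def shuffle_shard(tenant_id: str, nodes: list, shard_size: int) -> list:
    """Pick a deterministic pseudo-random subset of nodes for a tenant.

    Ranking every node by a tenant-specific hash gives each tenant its own
    shuffled view of the fleet; taking the top `shard_size` yields a shard
    that rarely coincides exactly with another tenant's shard.
    """
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{tenant_id}:{n}".encode()).digest(),
    )
    return ranked[:shard_size]

fleet = [f"node-{i}" for i in range(8)]
shard_a = shuffle_shard("tenant-a", fleet, 2)
shard_b = shuffle_shard("tenant-b", fleet, 2)
```

With 8 nodes and shards of 2 there are 28 possible shards, so most tenant pairs overlap on at most one node; a misbehaving tenant degrades only the shards it touches, not the whole fleet.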


449 episodes


All episodes

×
 
Kai Wang joins the MLOps Community podcast LIVE to share how Uber built and scaled its ML platform, Michelangelo. From mission-critical models to tools for both beginners and experts, he walks us through Uber’s AI playbook—and teases plans to open-source parts of it. // Bio Kai Wang is the product lead of the AI platform team at Uber, overseeing Uber's internal end-to-end ML platform called Michelangelo that powers 100% Uber's business-critical ML use cases. // Related Links Uber GenAI: https://www.uber.com/blog/from-predictive-to-generative-ai/ #uber #podcast #ai #machinelearning ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Kai on LinkedIn: /kai-wang-67457318/ Timestamps: [00:00] Rethinking AI Beyond ChatGPT [04:01] How Devs Pick Their Tools [08:25] Measuring Dev Speed Smartly [10:14] Predictive Models at Uber [13:11] When ML Strategy Shifts [15:56] Smarter Uber Eats with AI [19:29] Summarizing Feedback with ML [23:27] GenAI That Users Notice [27:19] Inference at Scale: Michelangelo [32:26] Building Uber’s AI Studio [33:50] Faster AI Agents, Less Pain [39:21] Evaluating Models at Uber [42:22] Why Uber Open-Sourced Machanjo [44:32] What Fuels Uber’s AI Team…
 
The Missing Data Stack for Physical AI // MLOps Podcast #328 with Nikolaus West, CEO of Rerun. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract Nikolaus West, CEO of Rerun, breaks down the challenges and opportunities of physical AI—AI that interacts with the real world. He explains why traditional software falls short in dynamic environments and how visualization, adaptability, and better tooling are key to making robotics and spatial computing more practical. // Bio Niko is a second-time founder and software engineer with a computer vision background from Stanford. He’s a fanatic about bringing great computer vision and robotics products to the physical world. // Related Links Website: rerun.io ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community [ https://go.mlops.community/slack ] Follow us on X/Twitter [ @mlopscommunity ]( https://x.com/mlopscommunity ) or [LinkedIn]( https://go.mlops.community/linkedin )] Sign up for the next meetup: [ https://go.mlops.community/register ] MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Niko on LinkedIn: /NikolausWest Timestamps: [00:00] Niko's preferred coffee [00:35] Physical AI vs Robotics Debate [04:40] IoT Hype vs Reality [12:16] Physical AI Lifecycle Overview [20:05] AI Constraints in Robotics [23:42] Data Challenges in Robotics [33:37] Open Sourcing AI Tools [39:36] Rerun Platform Integration [40:57] Data Integration for Insights [45:02] Data Pipelines and Quality [49:19] Robotics Design Trade-offs [52:25] Wrap up…
 
LLMs are reshaping the future of data and AI—and ignoring them might just be career malpractice. Yoni Michael and Kostas Pardalis unpack what’s breaking, what’s emerging, and why inference is becoming the new heartbeat of the data pipeline. // Bio Kostas Pardalis Kostas is an engineer-turned-entrepreneur with a passion for building products and companies in the data space. He’s currently the co-founder of Typedef. Before that, he worked closely with the creators of Trino at Starburst Data on some exciting projects. Earlier in his career, he was part of the leadership team at Rudderstack, helping the company grow from zero to a successful Series B in under two years. He also founded Blendo in 2014, one of the first cloud-based ELT solutions. Yoni Michael Yoni is the Co-Founder of typedef, a serverless data platform purpose-built to help teams process unstructured text and run LLM inference pipelines at scale. With a deep background in data infrastructure, Yoni has spent over a decade building systems at the intersection of data and AI — including leading infrastructure at Tecton and engineering teams at Salesforce. Yoni is passionate about rethinking how teams extract insight from massive troves of text, transcripts, and documents — and believes the future of analytics depends on bridging traditional data pipelines with modern AI workflows. At Typedef, he’s working to make that future accessible to every team, without the complexity of managing infrastructure. 
// Related Links Website: https://www.typedef.ai https://techontherocks.show https://www.cpard.xyz ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Kostas on LinkedIn: /kostaspardalis/ Connect with Yoni on LinkedIn: / yonimichael / Timestamps: [00:00] Breaking Tools, Evolving Data Workloads [06:35] Building Truly Great Data Teams [10:49] Making Data Platforms Actually Useful [18:54] Scaling AI with Native Integration [24:04] Empowering Employees to Build Agents [28:17] Rise of the AI Sherpa [36:09] Real AI Infrastructure Pain Points [38:05] Fixing Gaps Between Data, AI [46:04] Smarter Decisions Through Better Data [50:18] LLMs as Human-Machine Interfaces [53:40] Why Summarization Still Falls Short [01:01:15] Smarter Chunking, Fixing Text Issues [01:09:08] Evaluating AI with Canary Pipelines [01:11:46] Finding Use Cases That Matter [01:17:38] Cutting Costs, Keeping AI Quality [01:25:15] Aligning MLOps to Business Outcomes [01:29:44] Communities Thrive on Cross-Pollination [01:34:56] Evaluation Tools Quietly Consolidating…
 
What makes a good AI benchmark? Greg Kamradt joins Demetrios to break it down—from human-easy, AI-hard puzzles to wild new games that test how fast models can truly learn. They talk hidden datasets, compute tradeoffs, and why benchmarks might be our best bet for tracking progress toward AGI. It’s nerdy, strategic, and surprisingly philosophical. // Bio Greg has mentored thousands of developers and founders, empowering them to build AI-centric applications.By crafting tutorial-based content, Greg aims to guide everyone from seasoned builders to ambitious indie hackers.Greg partners with companies during their product launches, feature enhancements, and funding rounds. His objective is to cultivate not just awareness, but also a practical understanding of how to optimally utilize a company's tools.He previously led Growth @ Salesforce for Sales & Service Clouds in addition to being early on at Digits, a FinTech Series-C company. // Related Links Website: https://gregkamradt.com/ YouTube channel: https://www.youtube.com/@DataIndependent ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Greg on LinkedIn: /gregkamradt/ Timestamps: [00:00] Human-Easy, AI-Hard [05:25] When the Model Shocks Everyone [06:39] “Let’s Circle Back on That Benchmark…” [09:50] Want Better AI? Pay the Compute Bill [14:10] Can We Define Intelligence by How Fast You Learn? [16:42] Still Waiting on That Algorithmic Breakthrough [20:00] LangChain Was Just the Beginning [24:23] Start With Humans, End With AGI [29:01] What If Reality’s Just... What It Seems? [32:21] AI Needs Fewer Vibes, More Predictions [36:02] Defining Intelligence (No Pressure) [36:41] AI Building AI? Yep, We're Going There [40:13] Open Source vs. Prize Money Drama [43:05] Architecting the ARC Challenge [46:38] Agent 57 and the Atari Gauntlet…
 
Bridging the Gap Between AI and Business Data // MLOps Podcast #325 with Deepti Srivastava, Founder and CEO at Snow Leopard. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract I’m sure the MLOps community is probably aware – it's tough to make AI work in enterprises for many reasons, from data silos, data privacy and security concerns, to going from POCs to production applications. But one of the biggest challenges facing businesses today, that I particularly care about, is how to unlock the true potential of AI by leveraging a company’s operational business data. At Snow Leopard, we aim to bridge the gap between AI systems and critical business data that is locked away in databases, data warehouses, and other API-based systems, so enterprises can use live business data from any data source – whether it's database, warehouse, or APIs – in real time and on demand, natively. In this interview, I'd like to cover Snow Leopard’s intelligent data retrieval approach that can leverage business data directly and on-demand to make AI work. // Bio Deepti is the founder and CEO of Snow Leopard AI, a platform that helps teams build AI apps using their live business data, on-demand. She has nearly 2 decades of experience in data platforms and infrastructure. As Head of Product at Observable, Deepti led the 0→1 product and GTM strategy in the crowded data analytics market. Before that, Deepti was the founding PM for Google Spanner, growing it to thousands of internal customers (Ads, PlayStore, Gmail, etc.), before launching it externally as a seminal cloud database service. Deepti started her career as a distributed systems engineer in the RAC database kernel at Oracle. 
// Related Links Website: https://www.snowleopard.ai/ AI SQL Data Analyst // Donné Stevenson - https://youtu.be/hwgoNmyCGhQ ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community [ https://go.mlops.community/slack ] Follow us on X/Twitter [ @mlopscommunity ]( https://x.com/mlopscommunity ) or [LinkedIn]( https://go.mlops.community/linkedin )] Sign up for the next meetup: [ https://go.mlops.community/register ] MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Deepti on LinkedIn: /thedeepti/ Timestamps: [00:00] Deepti's preferred coffee [00:49] MLflow vs Kubeflow Debate [04:58] GenAI Data Integration Challenges [09:02] GenAI Sidecar Spicy Takes [14:07] Troubleshooting LLM Hallucinations [19:03] AI Overengineering and Hype [25:06] Self-Serve Analytics Governance [33:29] Dashboards vs Data Quality [37:06] Agent Database Context Control [43:00] LLM as Orchestrator [47:34] Tool Call Ownership Clarification [51:45] MCP Server Challenges [56:52] Wrap up…
 
The Creator of FastAPI’s Next Chapter // MLOps Podcast #324 with Sebastián Ramírez, Developer at FastAPI Labs. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract The creator of FastAPI is back with a new chapter—FastAPI Cloud. From building one of the most loved dev tools to launching a company, Sebastián Ramírez shares how open source, developer experience, and a dash of humor are shaping the future of APIs. // Bio Sebastián Ramírez (also known as Tiangolo) is the creator of FastAPI, Typer, SQLModel, Asyncer, and several other widely used open-source tools. He has collaborated with companies and teams around the world—from Latin America to the Middle East, Europe, and the United States—building a range of products and custom solutions focused on APIs, data processing, distributed systems, and machine learning. Today, he works full time on FastAPI and its growing ecosystem. // Related Links Website: https://tiangolo.com/ FastAPI: https://fastapi.tiangolo.com/ FastAPI Cloud: https://fastapicloud.com/ FastAPI for Machine Learning // Sebastián Ramírez // MLOps Coffee Sessions #96 - https://youtu.be/NpvRhZnkEFg ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community [https://go.mlops.community/slack] Follow us on X/Twitter [ @mlopscommunity ]( https://x.com/mlopscommunity ) or [LinkedIn]( https://go.mlops.community/linkedin )] Sign up for the next meetup: [ https://go.mlops.community/register ] MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Tiangolo on LinkedIn: /tiangolo Timestamps: [00:00] Sebastián's preferred coffee [00:15] Takeaways [01:43] Why Pydantic is Awesome [06:47] ML Background and FastAPI [10:44] NASA FastAPI Emojis [15:21] FastAPI Cloud Journey [26:07] FastAPI Cloud Open-Source Balance [31:45] Basecamp Design Philosophy 
[35:30] AI Abstraction Strategies [42:56] Engineering vs Developer Experience [51:40] Dogfooding and Docs Strategy [59:44] Code Simplicity and Trust [1:04:26] Scaling Without Losing Vision [1:08:20] FastAPI Cloud Signup [1:09:23] Wrap up…
 
Willem Pienaar and Shreya Shankar discuss the challenge of evaluating agents in production where "ground truth" is ambiguous and subjective user feedback isn't enough to improve performance. The discussion breaks down the three "gulfs" of human-AI interaction—Specification, Generalization, and Comprehension—and their impact on agent success. Willem and Shreya cover the necessity of moving the human "out of the loop" for feedback, creating faster learning cycles through implicit signals rather than direct, manual review.The conversation details practical evaluation techniques, including analyzing task failures with heat maps and the trade-offs of using simulated environments for testing. Willem and Shreya address the reality of a "performance ceiling" for AI and the importance of categorizing problems your agent can, can learn to, or will likely never be able to solve. // Bio Shreya Shankar PhD student in data management for machine learning. Willem Pienaar Willem Pienaar, CTO of Cleric, is a builder with a focus on LLM agents, MLOps, and open source tooling. He is the creator of Feast, an open source feature store, and contributed to the creation of both the feature store and MLOps categories. Before starting Cleric, Willem led the open source engineering team at Tecton and established the ML platform team at Gojek, where he built high scale ML systems for the Southeast Asian decacorn. 
// Related Links https://www.google.com/about/careers/applications/?utm_campaign=profilepage&utm_medium=profilepage&utm_source=linkedin&src=Online/LinkedIn/linkedin_page https://cleric.ai/ ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Shreya on LinkedIn: /shrshnk Connect with Willem on LinkedIn: /willempienaar Timestamps: [00:00] Trust Issues in AI Data [04:49] Cloud Clarity Meets Retrieval [09:37] Why Fast AI Is Hard [11:10] Fixing AI Communication Gaps [14:53] Smarter Feedback for Prompts [19:23] Creativity Through Data Exploration [23:46] Helping Engineers Solve Faster [26:03] The Three Gaps in AI [28:08] Alerts Without the Noise [33:22] Custom vs General AI [34:14] Sharpening Agent Skills [40:01] Catching Repeat Failures [43:38] Rise of Self-Healing Software [44:12] The Chaos of Monitoring AI…
 
Tricks to Fine Tuning // MLOps Podcast #318 with Prithviraj Ammanabrolu, Research Scientist at Databricks . Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract Prithviraj Ammanabrolu drops by to break down Tao fine-tuning—a clever way to train models without labeled data. Using reinforcement learning and synthetic data, Tao teaches models to evaluate and improve themselves. Raj explains how this works, where it shines (think small models punching above their weight), and why it could be a game-changer for efficient deployment. // Bio Raj is an Assistant Professor of Computer Science at the University of California, San Diego, leading the PEARLS Lab in the Department of Computer Science and Engineering (CSE). He is also a Research Scientist at Mosaic AI, Databricks, where his team is actively recruiting research scientists and engineers with expertise in reinforcement learning and distributed systems. Previously, he was part of the Mosaic team at the Allen Institute for AI. He earned his PhD in Computer Science from the School of Interactive Computing at Georgia Tech, advised by Professor Mark Riedl in the Entertainment Intelligence Lab. 
// Related Links Website: https://www.databricks.com/ ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community [ https://go.mlops.community/slack ] Follow us on X/Twitter [ @mlopscommunity ]( https://x.com/mlopscommunity ) or [LinkedIn]( https://go.mlops.community/linkedin )] Sign up for the next meetup: [ https://go.mlops.community/register ] MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Raj on LinkedIn: /rajammanabrolu Timestamps: [00:00] Raj's preferred coffee [00:36] Takeaways [01:02] Tao Naming Decision [04:19] No Labels Machine Learning [08:09] Tao and TAO breakdown [13:20] Reward Model Fine-Tuning [18:15] Training vs Inference Compute [22:32] Retraining and Model Drift [29:06] Prompt Tuning vs Fine-Tuning [34:32] Small Model Optimization Strategies [37:10] Small Model Potential [43:08] Fine-tuning Model Differences [46:02] Mistral Model Freedom [53:46] Wrap up…
 
Packaging MLOps Tech Neatly for Engineers and Non-engineers // MLOps Podcast #322 with Jukka Remes, Senior Lecturer (SW dev & AI), AI Architect at Haaga-Helia UAS, Founder & CTO at 8wave AI. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract AI is already complex—adding the need for deep engineering expertise to use MLOps tools only makes it harder, especially for SMEs and research teams with limited resources. Yet, good MLOps is essential for managing experiments, sharing GPU compute, tracking models, and meeting AI regulations. While cloud providers offer MLOps tools, many organizations need flexible, open-source setups that work anywhere—from laptops to supercomputers. Shared setups can boost collaboration, productivity, and compute efficiency.In this session, Jukka introduces an open-source MLOps platform from Silo AI, now packaged for easy deployment across environments. With Git-based workflows and CI/CD automation, users can focus on building models while the platform handles the MLOps.// BioFounder & CTO, 8wave AI | Senior Lecturer, Haaga-Helia University of Applied SciencesJukka Remes has 28+ years of experience in software, machine learning, and infrastructure. Starting with SW dev in the late 1990s and analytics pipelines of fMRI research in early 2000s, he’s worked across deep learning (Nokia Technologies), GPU and cloud infrastructure (IBM), and AI consulting (Silo AI), where he also led MLOps platform development. Now a senior lecturer at Haaga-Helia, Jukka continues evolving that open-source MLOps platform with partners like the University of Helsinki. He leads R&D on GenAI and AI-enabled software, and is the founder of 8wave AI, which develops AI Business Operations software for next-gen AI enablement, including regulatory compliance of AI. 
// Related Links Open source -based MLOps k8s platform setup originally developed by Jukka's team at Silo AI - free for any use and installable in any environment from laptops to supercomputing: https://github.com/OSS-MLOPS-PLATFORM/oss-mlops-platform Jukka's new company: https://8wave.ai ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community [ https://go.mlops.community/slack ] Follow us on X/Twitter [ @mlopscommunity ]( https://x.com/mlopscommunity ) or [LinkedIn]( https://go.mlops.community/linkedin )] Sign up for the next meetup: [ https://go.mlops.community/register ] MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Jukka on LinkedIn: /jukka-remes Timestamps: [00:00] Jukka's preferred coffee [00:39] Open-Source Platform Benefits [01:56] Silo MLOps Platform Explanation [05:18] AI Model Production Processes [10:42] AI Platform Use Cases [16:54] Reproducibility in Research Models [26:51] Pipeline setup automation [33:26] MLOps Adoption Journey [38:31] EU AI Act and Open Source [41:38] MLOps and 8wave AI [45:46] Optimizing Cross-Stakeholder Collaboration [52:15] Open Source ML Platform [55:06] Wrap up…
 
Tecton⁠ Founder and CEO Mike Del Balso talks about what ML/AI use cases are core components generating Millions in revenue. Demetrios and Mike go through the maturity curve that predictive Machine Learning use cases have gone through over the past 5 years, and why a feature store is a primary component of an ML stack. // Bio Mike Del Balso is the CEO and co-founder of Tecton, where he’s building the industry’s first feature platform for real-time ML. Before Tecton, Mike co-created the Uber Michelangelo ML platform. He was also a product manager at Google where he managed the core ML systems that power Google’s Search Ads business. He studied Applied Science, Electrical & Computer Engineering at the University of Toronto. // Related Links Website: ⁠www.tecton.ai⁠ ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: ⁠https://go.mlops.community/TYExplore⁠ MLOps Swag/Merch: [ ⁠https://shop.mlops.community/⁠ ] Connect with Demetrios on LinkedIn: ⁠/dpbrinkm⁠ Connect with Mike on LinkedIn: ⁠/michaeldelbalso⁠ Timestamps: [00:00] Smarter decisions, less manual work [03:52] Data pipelines: pain and fixes [08:45] Why Tecton was born [11:30] ML use cases shift [14:14] Models for big bets [18:39] Build or buy drama [20:20] Fintech's data playbook [23:52] What really needs real-time [28:07] Speeding up ML delivery [32:09] Valuing ML is tricky [35:29] Simplifying ML toolkits [37:18] AI copilots in action [42:13] AI that fights fraud [45:07] Teaming up across coasts [46:43] Tecton + Generative AI?…
 
Raza Habib, the CEO of LLM Eval platform Humanloop , talks to us about how to make your AI products more accurate and reliable by shortening the feedback loop of your evals. Quickly iterating on prompts and testing what works, along with some of his favorite Dario from Anthropic AI Quotes. // Bio Raza is the CEO and Co-founder at Humanloop. He has a PhD in Machine Learning from UCL, was the founding engineer of Monolith AI, and has built speech systems at Google. For the last 4 years, he has led Humanloop and supported leading technology companies such as Duolingo, Vanta, and Gusto to build products with large language models. Raza was featured in the Forbes 30 Under 30 technology list in 2022, and Sifted recently named him one of the most influential Gen AI founders in Europe. // Related Links Websites: https://humanloop.com ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Raza on LinkedIn: /humanloop-raza Timestamps: [00:00] Cracking Open System Failures and How We Fix Them [05:44] LLMs in the Wild — First Steps and Growing Pains [08:28] Building the Backbone of Tracing and Observability [13:02] Tuning the Dials for Peak Model Performance [13:51] From Growing Pains to Glowing Gains in AI Systems [17:26] Where Prompts Meet Psychology and Code [22:40] Why Data Experts Deserve a Seat at the Table [24:59] Humanloop and the Art of Configuration Taming [28:23] What Actually Matters in Customer-Facing AI [33:43] Starting Fresh with Private Models That Deliver [34:58] How LLM Agents Are Changing the Way We Talk [39:23] The Secret Lives of Prompts Inside Frameworks [42:58] Streaming Showdowns — Creativity vs. Convenience [46:26] Meet Our Auto-Tuning AI Prototype [49:25] Building the Blueprint for Smarter AI [51:24] Feedback Isn’t Optional — It’s Everything…
 
Getting AI Apps Past the Demo // MLOps Podcast #319 with Vaibhav Gupta, CEO of BoundaryML. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract It's been two years, and we still seem to see AI disproportionately more in demos than production features. Why? And how can we apply engineering practices we've all learned in the past decades to our advantage here? // Bio Vaibhav is one of the creators of BAML and a YC alum. He spent 10 years in AI performance optimization at places like Google, Microsoft, and D.E. Shaw. He loves diving deep and chatting about anything related to Gen AI and Computer Vision! // Related Links Website: https://www.boundaryml.com/ ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community [ https://go.mlops.community/slack ] Follow us on X/Twitter [ @mlopscommunity ]( https://x.com/mlopscommunity ) or [LinkedIn]( https://go.mlops.community/linkedin ) Sign up for the next meetup: [ https://go.mlops.community/register ] MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Vaibhav on LinkedIn: /vaigup Timestamps: [00:00] Vaibhav's preferred coffee [00:38] What is BAML [03:07] LangChain Overengineering Issues [06:46] Verifiable English Explained [11:45] Python AI Integration Challenges [15:16] Strings as First-Class Code [21:45] Platform Gap in Development [30:06] Workflow Efficiency Tools [33:10] Surprising BAML Insights [40:43] BAML Cool Projects [45:54] BAML Developer Conversations [48:39] Wrap up…
 
Demetrios and Mohan Atreya break down the GPU madness behind AI — from supply headaches and sky-high prices to the rise of nimble GPU clouds trying to outsmart the giants. They cover power-hungry hardware, failed experiments, and how new cloud models are shaking things up with smarter provisioning, tokenized access, and a whole lotta hustle. It's a wild ride through the guts of AI infrastructure — fun, fast, and full of sparks! Big thanks to the folks at Rafay for backing this episode — appreciate the support in making these conversations happen! // Bio Mohan is a seasoned and innovative product leader currently serving as the Chief Product Officer at Rafay Systems. He has led multi-site teams and driven product strategy at companies like Okta, Neustar, and McAfee. // Related Links Websites: https://rafay.co/ ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Mohan on LinkedIn: /mohanatreya Timestamps: [00:00] AI/ML Customer Challenges [04:21] Dependency on Microsoft for Revenue [09:08] Challenges of Hypothesis in AI/ML [12:17] Neo Cloud Onboarding Challenges [15:02] Elastic GPU Cloud Automation [19:11] Dynamic GPU Inventory Management [20:25] Terraform Lacks Inventory Awareness [26:42] Onboarding and End-User Experience Strategies [29:30] Optimizing Storage for Data Efficiency [33:38] Pizza Analogy: User Preferences [35:18] Token-Based GPU Cloud Monetization [39:01] Empowering Citizen Scientists with AI [42:31] Innovative CFO Chatbot Solutions [47:09] Cloud Services Need Spectrum…
 
Demetrios, Sam Partee, and Rahul Parundekar unpack the chaos of AI agent tools and the evolving world of MCP (Model Context Protocol). With sharp insights and plenty of laughs, they dig into tool permissions, security quirks, agent memory, and the messy path to making agents actually useful. // Bio Sam Partee Sam Partee is the CTO and Co-Founder of Arcade AI. Previously a Principal Engineer leading the Applied AI team at Redis, Sam led the effort in creating the ecosystem around Redis as a vector database. He is a contributor to multiple OSS projects including LangChain, DeterminedAI, LlamaIndex, and Chapel, amongst others. While at Cray/HPE he created the SmartSim AI framework, which is now used at national labs around the country to integrate HPC simulations like climate models with AI. Rahul Parundekar Rahul Parundekar is the founder of AI Hero. He graduated with a Master's in Computer Science from USC Los Angeles in 2010 and embarked on a career focused on Artificial Intelligence. From 2010-2017, he worked as a Senior Researcher at Toyota ITC working on agent autonomy within vehicles. His journey continued as the Director of Data Science at FigureEight (later acquired by Appen), where he and his team developed an architecture supporting over 36 ML models and managing over a million predictions daily. Since 2021, he has been working on AI Hero, aiming to democratize AI access, while also consulting on LLMOps (Large Language Model Operations) and AI system scalability. Other than his full-time role as a founder, he is also passionate about community engagement, actively organizes MLOps events in SF, and contributes educational content on RAG and LLMOps at learn.mlops.community.
// Related Links Websites: arcade.dev // aihero.studio ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Rahul on LinkedIn: /rparundekar Connect with Sam on LinkedIn: /sampartee Timestamps: [00:00] Agents & Tools, Explained (Without Melting Your Brain) [09:51] MVP Servers: Why Everything’s on Fire (and How to Fix It) [13:18] Can We Actually Trust the Protocol? [18:13] KYC, But Make It AI (and Less Painful) [25:25] Web Automation Tests: The Bugs Strike Back [28:18] MCP Dev: What Went Wrong (and What Saved Us) [33:53] Social Login: One Button to Rule Them All [39:33] What Even Is an AI-Native Developer? [42:21] Betting Big on Smarter Models (High Risk, High Reward) [51:40] Harrison’s Bold New Tactic (With Real-Life Magic Tricks) [55:31] Async Task Handoffs: Herding Cats, But Digitally [1:00:37] Getting AI to Actually Help Your Workflow [1:03:53] The Infamous Varma System Error (And How We Dodge It)…
 
AI in M&A: Building, Buying, and the Future of Dealmaking // MLOps Podcast #315 with Kison Patel, CEO of DealRoom and M&A Science. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract The intersection of M&A and AI: how the DealRoom team developed AI capabilities, and the practical use cases of AI in dealmaking. They discuss the evolving landscape of AI-driven M&A, the factors that make AI companies attractive acquisition targets, and the key indicators of success in this space. // Bio Kison Patel is the Founder and CEO of DealRoom, an M&A lifecycle management platform designed for buyer-led M&A and recognized twice on the Inc. 5000 Fastest Growing Companies list. He also founded M&A Science, a global community offering courses, events, and the top-rated M&A Science podcast with over 2.25 million downloads. Through the podcast, Kison shares actionable insights from top M&A experts, helping professionals modernize their approach to dealmaking. He is also the author of *Agile M&A: Proven Techniques to Close Deals Faster and Maximize Value*, a guide to tech-enabled, adaptive M&A practices. Kison is dedicated to disrupting traditional M&A with innovative tools and education, empowering teams to drive greater efficiency and value.
// Related Links Website: https://dealroom.net https://www.mascience.com ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community [ https://go.mlops.community/slack ] Follow us on X/Twitter [ @mlopscommunity ]( https://x.com/mlopscommunity ) or [LinkedIn]( https://go.mlops.community/linkedin ) Sign up for the next meetup: [ https://go.mlops.community/register ] MLOps Swag/Merch: [ https://shop.mlops.community/ ] Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Kison on LinkedIn: /kisonpatel Timestamps: [00:00] Kison's preferred coffee [00:11] Takeaways [00:40] Founders Journey Slumps [05:07] Jira for M&A [10:57] Overcoming Idea Paralysis [14:32] Customer-led Discovery Success [22:20] Legal Fees in Deals [26:24] Data Room Differentiators [29:26] PLG vs Sales Teams [31:43] AI Pricing Strategies [35:15] PLG AI Cost Optimization [40:53] Building AI Teams [47:40] Great Companies Are Bought [51:10] M&A Failures and Fever [54:23] Wrap up…
 