Content provided by Demetrios. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Demetrios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Model Monitoring in Practice: Top Trends // Krishnaram Kenthapadi // MLOps Coffee Sessions #93

51:36
 

MLOps Coffee Sessions #93 with Krishnaram Kenthapadi, Model Monitoring in Practice: Top Trends, co-hosted by Mihail Eric
// Abstract
We first motivate the need for ML model monitoring, as part of a broader AI model governance and responsible AI framework, and provide a roadmap for thinking about model monitoring in practice.
We then present findings and insights on model monitoring in practice based on interviews with various ML practitioners spanning domains such as financial services, healthcare, hiring, online retail, computational advertising, and conversational assistants.
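The monitoring roadmap discussed in the episode typically begins with drift detection on model inputs and scores. As an illustrative sketch (not taken from the episode — the function, variable names, and thresholds here are assumptions), a common drift metric, the Population Stability Index, can be computed like this:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample (e.g. training-time feature values)
    and a live sample. PSI near 0 means no drift; values above ~0.2 are
    often treated as significant (a rule of thumb, not a standard)."""
    # Bin edges come from the baseline distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, guarding against empty bins.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # hypothetical training data
drifted = rng.normal(0.5, 1.0, 10_000)   # shifted production data

psi_same = population_stability_index(baseline, baseline)
psi_drift = population_stability_index(baseline, drifted)
```

In practice a monitoring platform computes a metric like this per feature on a schedule and alerts when it crosses a configured threshold.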
// Bio
Krishnaram Kenthapadi is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and ML monitoring platform. Previously, he was a Principal Scientist at Amazon AWS AI, where he led the fairness, explainability, privacy, and model understanding initiatives on the Amazon AI platform. Prior to joining Amazon, he led similar efforts on the LinkedIn AI team and served as LinkedIn’s representative on Microsoft’s AI and Ethics in Engineering and Research (AETHER) Advisory Board. Before that, he was a Researcher at Microsoft Research Silicon Valley Lab. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences, and co-chaired the 2014 ACM Symposium on Computing for Development. His work has been recognized through awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft’s AI/ML conference (MLADS). He has published 50+ papers with 4500+ citations and filed 150+ patents (70 granted). He has presented tutorials on privacy, fairness, explainable AI, and responsible AI at forums such as KDD ’18 ’19, WSDM ’19, WWW ’19 ’20 ’21, FAccT ’20 ’21, AAAI ’20 ’21, and ICML ’21.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// Related Links
Website: https://cs.stanford.edu/people/kngk/
https://sites.google.com/view/ResponsibleAITutorial
https://sites.google.com/view/explainable-ai-tutorial
https://sites.google.com/view/fairness-tutorial
https://sites.google.com/view/privacy-tutorial
--------------- ✌️Connect With Us ✌️ -------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Mihail on LinkedIn: https://www.linkedin.com/in/mihaileric/
Connect with Krishnaram on LinkedIn: https://www.linkedin.com/in/krishnaramkenthapadi
Timestamps:
[00:00] Introduction to Krishnaram Kenthapadi
[02:22] Takeaways
[04:55] Thank you Fiddler AI for sponsoring this episode!
[05:15] Struggles in Explainable AI
[06:16] Attacking the problem of difficult models and architectures in Explainability
[08:30] Explainable AI prominence
[09:56] Importance of password manager and actual security
[14:27] Role of Education in Explainable AI systems
[18:52] Highly regulated domains in other sectors
[21:12] First machine learning wins
[23:36] Model monitoring
[25:35] Interests in ML monitoring and Explainability
[27:27] Future of Explainability in the wide range of ML models
[29:57] Non-technical stakeholders' voice
[33:54] Advice to ML practitioners to address organizational concerns
[38:49] Ethically sourced data set
[42:15] Crowd-sourced labor
[43:35] Recommendations to organizations about their minimal explainable product
[46:29] Tension in practice
[50:09] Wrap up

442 episodes
