A Blueprint for Scalable & Reliable Enterprise AI/ML Systems // Panel // AIQCON

35:38
 
Content provided by Demetrios. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Demetrios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

This is a panel taken from the recent AI Quality Conference, presented by the MLOps Community and Kolena.

// Abstract
Enterprise AI leaders continue to explore the best productivity solutions that solve business problems, mitigate risks, and increase efficiency. Building reliable and secure AI/ML systems requires following industry standards, an operating framework, and best practices that can accelerate and streamline a scalable architecture capable of producing the expected business outcomes. This session, featuring veteran practitioners, focuses on building scalable, reliable, and high-quality AI and ML systems for the enterprise.

// Panelists
- Hira Dangol: VP, AI/ML and Automation @ Bank of America
- Rama Akkiraju: VP, Enterprise AI/ML @ NVIDIA
- Nitin Aggarwal: Head of AI Services @ Google
- Steven Eliuk: VP, AI and Governance @ IBM

A big thank you to our Premium Sponsors Google Cloud & Databricks for their generous support!

Timestamps:

00:00 Panelists discuss vision and strategy in AI

05:18 Steven Eliuk on IBM's expertise in data services

07:30 AI as means to improve business metrics

11:10 Key metrics in production systems: efficiency and revenue

13:50 Consistency in data standards aids data integration

17:47 Generative AI presents new data classification risks

22:47 Evaluating implications, monitoring, and validating use cases

26:41 Evaluating natural language answers for efficient production

29:10 Monitoring AI models for performance and ethics

31:14 AI metrics and user responsibility for future models

34:56 Access to data is improving, promising progress
