Content provided by Virtualitics. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Virtualitics or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
The Keystones of Responsible AI: Explainability and Visualizations

41:26
 
Manage episode 332308569 series 3364353
The future is no-code or low-code. But as more companies implement AI solutions and put them in the hands of non-technical users, how do you maintain accuracy and responsible use?

The answer: explainable AI paired with multi-dimensional visualizations. Together, they provide a more complete picture of the story playing out in a predictive system and are essential for ensuring responsible AI applications.

Join Caitlin Bigsby, Head of Product Marketing, as she sits down with Virtualitics’ Co-Heads of AI, Sarthak Sahu and Aakash Indurkhya, to discuss how to incorporate visualizations into the ML lifecycle and the key considerations involved.


Chapters

1. Introduction (00:00:00)

2. The Importance of Exploration (00:09:54)

3. Exploring for More Responsible AI (00:16:45)

4. Exposing the Mechanisms of AI (00:22:37)

5. What Stakeholders need to know BEFORE (00:26:02)

6. Why Context for the User is Necessary (00:30:11)

7. Why Context Makes AI More Responsible (00:34:20)

46 episodes

