Content provided by Jason Edwards. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Jason Edwards or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Episode 35 — Transparency and Explainability

31:10
 

Manage episode 505486186 series 3689029

AI systems are powerful, but when their outputs cannot be understood, they risk losing trust. This episode explores transparency and explainability as core qualities for responsible AI. We begin by distinguishing between transparency — openness about how systems are designed and trained — and explainability, which focuses on how specific decisions or predictions are made. White-box models like decision trees and linear regression are contrasted with black-box systems like deep neural networks, which achieve high accuracy but resist easy interpretation. Post-hoc techniques such as LIME and SHAP are introduced as tools for interpreting complex models, while documentation practices like model cards and datasheets add accountability.
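The core idea behind a post-hoc technique like LIME can be sketched in a few lines: train an opaque model, perturb the neighborhood of one prediction, and fit a weighted linear surrogate whose coefficients serve as the local explanation. This is a minimal illustrative sketch of that idea, not the LIME library itself; the dataset, noise scale, and proximity weighting are all assumptions chosen for the demo.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# A "black-box" model: accurate, but its internals resist easy inspection.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]  # the single prediction we want to explain

# Sample the neighborhood of the instance with Gaussian perturbations.
neighborhood = instance + rng.normal(scale=0.5, size=(200, X.shape[1]))

# Query the black box for class-1 probabilities on the perturbed points.
probs = black_box.predict_proba(neighborhood)[:, 1]

# Weight each sample by proximity to the original instance, so the
# surrogate is faithful locally rather than globally.
weights = np.exp(-np.linalg.norm(neighborhood - instance, axis=1))

# Fit an interpretable weighted linear model; its coefficients act as
# the local feature attributions.
surrogate = LinearRegression().fit(neighborhood, probs, sample_weight=weights)
for i, coef in enumerate(surrogate.coef_):
    print(f"feature_{i}: {coef:+.3f}")
```

The real LIME and SHAP packages add important refinements (feature discretization, kernel choices, Shapley-value axioms), but the surrogate-model intuition is the same.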

We also consider why explainability matters in practice. In healthcare, clinicians need to understand AI recommendations for patient safety. In finance, lending models must be explainable to comply with laws that protect consumers from discrimination. In government, algorithmic decisions that affect rights and opportunities must be transparent to uphold democratic accountability. Challenges include balancing interpretability with performance, ensuring explanations are meaningful to non-technical users, and avoiding superficial “explanations” that obscure deeper problems. By the end, listeners will understand that transparency and explainability are not optional extras — they are prerequisites for building AI systems that are trustworthy, auditable, and aligned with human values. Produced by BareMetalCyber.com, where you’ll find more cyber prepcasts, books, and information to strengthen your certification path.
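The model-card practice mentioned above is ultimately structured documentation. This hedged sketch shows one way such a card might be represented in code; the schema and all field values here are hypothetical, not a published standard.

```python
from dataclasses import dataclass, field, asdict
import json

# Illustrative model-card schema; field names are assumptions chosen to
# echo common model-card sections, not a formal specification.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

# Hypothetical card for a lending model like those discussed above.
card = ModelCard(
    model_name="loan-approval-v2",
    intended_use="Pre-screening consumer loan applications; not a final decision-maker.",
    training_data="Anonymized 2018-2023 application records (hypothetical).",
    evaluation_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["Underrepresents applicants under 25", "US data only"],
)

# Serializing the card makes it easy to publish alongside the model.
print(json.dumps(asdict(card), indent=2))
```

Keeping the card machine-readable means it can be versioned and audited with the model itself, which is the accountability point the episode makes.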


48 episodes




Copyright 2025 | Privacy Policy | Terms of Service