Content provided by Jim Baxter. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Jim Baxter or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

38. Should We Be Using AI to Predict Patient Preferences? With Nicholas Makins

43:53

This episode is part of what's becoming an informal series of Ethics Untangled episodes on ethical issues relating to artificial intelligence applications. The particular application we're looking at this time comes from a healthcare setting and is called a Patient Preference Predictor: a proposed way of using an algorithmic system to predict what a patient's healthcare preferences would be in situations where they're incapacitated and unable to tell us themselves. Ethicists have raised concerns about these systems, and those concerns are worth taking seriously, but Dr Nick Makins, Postdoctoral Research Fellow in Philosophy at the University of Leeds, thinks they can be answered, and that the use of these systems can be justified, at least in some circumstances.

Book your place at our public event with Gavin Esler, "Dead Cats, Strategic Lying and Truth Decay", here.

Ethics Untangled is produced by IDEA, The Ethics Centre at the University of Leeds.
Bluesky: @ethicsuntangled.bsky.social
Facebook: https://www.facebook.com/ideacetl
LinkedIn: https://www.linkedin.com/company/idea-ethics-centre/


76 episodes
