Content provided by The Alan Turing Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Alan Turing Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Defining AI safety

54:15
 

Ed and David chat with Professor Ibrahim Habli, Research Director at the Centre for Assuring Autonomy at the University of York and Director of the UKRI Centre for Doctoral Training in Safe AI Systems. The conversation covers defining and contextualising AI safety and risk, given the established safety practices of other industries. Ibrahim has collaborated with The Alan Turing Institute on the "Trustworthy and Ethical Assurance platform", or "TEA" for short, an open-source tool for developing and communicating structured assurance arguments that show how data science and AI technologies adhere to ethical principles.

63 episodes

The Turing Podcast

