
Content provided by RadicalxChange Foundation. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by RadicalxChange Foundation or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.

Joe Edelman: Co-Founder of Meaning Alignment Institute

1:21:45
 

What happens when artificial intelligence starts weighing in on our moral decisions? Matt Prewitt is joined by Meaning Alignment Institute co-founder Joe Edelman to explore how AI is already shaping our daily experiences and values through social media algorithms. They discuss the tools Edelman has developed to help individuals negotiate their values, and the implications of giving AI a role in moral reasoning – venturing into questions about human-AI symbiosis, the nature of meaningful experiences, and whether machines can truly understand what matters to us. For anyone intrigued by the future of human consciousness and decision-making in an AI-integrated world, this conversation opens up fascinating possibilities – and potential pitfalls – we may not have considered.

Links & References:

References:

Papers & posts mentioned:

Bios:

Joe Edelman is a philosopher, sociologist, and entrepreneur whose work spans from theoretical philosophy to practical applications in technology and governance. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures.
Joe’s Social Links:

Matt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.
Matt’s Social Links:

Production Credits:

This is a RadicalxChange Production.

Connect with RadicalxChange Foundation:
