
AI unleashed: The Moral Dimension of Generative Intelligence

1:33:36
 
Content provided by Uri Gal and Sean Hansen. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Uri Gal and Sean Hansen or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.
We finally break the seal on a discussion of generative artificial intelligence (AI). Since the launch of ChatGPT at the end of 2022, the potential of generative AI for good and for ill has dominated technology speculation across the globe. In this episode, we explore several of the ethical dimensions of generative AI (while conceding that such dimensions are almost unlimited).

Specific topics discussed include:

- The evidence of political bias in generative AI systems
- The inconsistent moral argumentation of generative AI systems
- Can generative AI take the place of human actors in cognitive science research (and would we want it to)?
- Is the potential for personalized persuasion via generative AI a boon for the field of marketing or a dangerous path toward societal thought control?
- Is there such a thing as a non-Irish Limerick?

Research discussed includes the following studies:

- Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences.
- Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13(1), 1–5.
- Matz, S., Teeny, J., Vaid, S. S., Harari, G. M., & Cerf, M. (2023). The Potential of Generative AI for Personalized Persuasion at Scale. PsyArXiv.
- McGee, R. W. (2023). Is ChatGPT biased against conservatives? An empirical study. SSRN.
- Rozado, D. (2023). The Political Biases of ChatGPT. Social Sciences, 12(3), 148.