Science, AI, and illusions of understanding

59:47
 
AI will fundamentally transform science. It will supercharge the research process, making it faster and more efficient and broader in scope. It will make scientists themselves vastly more productive, more objective, maybe more creative. It will make many human participants—and probably some human scientists—obsolete… Or at least these are some of the claims we are hearing these days. There is no question that various AI tools could radically reshape how science is done, and how much science is done. What we stand to gain in all this is pretty clear. What we stand to lose is less obvious, but no less important.

My guest today is Dr. Molly Crockett. Molly is a Professor in the Department of Psychology and the University Center for Human Values at Princeton University. In a recent, widely discussed article, Molly and the anthropologist Dr. Lisa Messeri presented a framework for thinking about the different roles that are being imagined for AI in science. And they argue that, when we adopt AI in these ways, we become vulnerable to certain illusions.

Here, Molly and I talk about four visions of AI in science that are currently circulating: AI as an Oracle, as a Surrogate, as a Quant, and as an Arbiter. We talk about the very real problems in the scientific process that AI promises to help us solve. We consider the ethics and challenges of using Large Language Models as experimental subjects. We talk about three illusions of understanding that crop up when we uncritically adopt AI into the research pipeline—an illusion that we understand more than we actually do; an illusion that we're covering a larger swath of a research space than we actually are; and the illusion that AI makes our work more objective. We also talk about how ideas from Science and Technology Studies (or STS) can help us make sense of this AI-driven transformation that, like it or not, is already upon us. Along the way, Molly and I touch on: AI therapists and AI tutors, anthropomorphism, the culture and ideology of Silicon Valley, Amazon's Mechanical Turk, fMRI, objectivity, quantification, Molly's mid-career crisis, monocultures, and the squishy parts of human experience.

Without further ado, on to my conversation with Dr. Molly Crockett. Enjoy!

A transcript of this episode will be posted soon.

Notes and links

5:00 – For more on LLMs—and the question of whether we understand how they work—see our earlier episode with Murray Shanahan.

9:00 – For the paper by Dr. Crockett and colleagues about the social/behavioral sciences and the COVID-19 pandemic, see here.

11:30 – For Dr. Crockett and colleagues’ work on outrage on social media, see this recent paper.

18:00 – For a recent exchange on the prospects of using LLMs in scientific peer review, see here.

20:30 – Donna Haraway’s essay, “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective,” is here. See also Dr. Haraway's book, Primate Visions.

22:00 – For the recent essay by Henry Farrell and others on AI as a cultural technology, see here.

23:00 – For a recent report on chatbots driving people to mental health crises, see here.

25:30 – For the already-classic “stochastic parrots” article, see here.

33:00 – For the study by Ryan Carlson and Dr. Crockett on using crowd-workers to study altruism, see here.

34:00 – For more on the “illusion of explanatory depth,” see our episode with Tania Lombrozo.

53:00 – For more about Ohio State’s plans to incorporate AI in the classroom, see here. For a recent essay by Dr. Crockett on the idea of “techno-optimism,” see here.

Recommendations

More Everything Forever, by Adam Becker

Transformative Experience, by L. A. Paul

Epistemic Injustice, by Miranda Fricker

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala.

Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here!

We welcome your comments, questions, and suggestions. Feel free to email us at: [email protected].

For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).
