Content provided by The Deeper Thinking Podcast. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Deeper Thinking Podcast or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.
Manage episode 483849057 series 3604075

Philosophy Didn’t Just Eat AI. It Wrote Its Code — and It’s Hungry for Meaning

An epistemic meditation on artificial intelligence as a philosophical actor—and the urgency of restoring meaning, not just function, to systems that now decide for us.

What does your AI system believe? In this episode, we expand on Michael Schrage and David Kiron’s MIT Sloan thesis, Philosophy Eats AI. We trace how systems built on machine logic inevitably encode assumptions about purpose, knowledge, and reality. This episode reframes AI not as infrastructure—but as worldview. A tool that doesn’t just compute, but commits.

This is a quiet engagement with how leadership itself must evolve. With reflections drawn from Gregory Bateson, Karen Barad, Michel Foucault, and Heinz von Foerster, we introduce the idea of synthetic judgment: the emerging ability to interpret, audit, and question what our systems silently believe on our behalf.

Reflections

  • Every AI model has a philosophy. Most organizations don’t know what it is.
  • Leadership now requires ontological fluency—what your systems can and can’t see defines your future.
  • AI doesn’t just support judgment. It simulates it—often without your permission.
  • The most dangerous AI systems aren’t wrong. They’re coherent in ways you never intended.
  • To govern AI well, you need to understand what kind of knowing it performs.
  • Synthetic judgment isn’t human vs machine. It’s the ability to remain critical inside coordination.

Why Listen?

  • Learn how AI systems enact hidden worldviews about purpose and value
  • Explore teleology, epistemology, and ontology as business infrastructure
  • Understand how synthetic judgment can be cultivated as a leadership skill
  • Engage with thinkers who saw long ago what AI now makes urgent

Support This Work

Support future episodes by visiting buymeacoffee.com/thedeeperthinkingpodcast or leaving a review on Apple Podcasts. Thank you.

Bibliography

  • Barad, Karen. Meeting the Universe Halfway. Duke University Press, 2007.
  • Bateson, Gregory. Steps to an Ecology of Mind. University of Chicago Press, 2000.
  • Bostrom, Nick. Superintelligence. Oxford University Press, 2014.
  • Crawford, Kate. Atlas of AI. Yale University Press, 2021.
  • Eubanks, Virginia. Automating Inequality. St. Martin’s Press, 2018.
  • Floridi, Luciano. The Logic of Information. Oxford University Press, 2019.
  • Foucault, Michel. The Order of Things. Vintage, 1994.
  • Harari, Yuval Noah. Homo Deus. Harvill Secker, 2016.
  • Kelleher, John D., and Brendan Tierney. Data Science. MIT Press, 2018.
  • Marcus, Gary, and Ernest Davis. Rebooting AI. Pantheon, 2019.
  • Mitchell, Melanie. Artificial Intelligence. Farrar, Straus and Giroux, 2019.
  • Morozov, Evgeny. To Save Everything, Click Here. PublicAffairs, 2013.
  • Noble, Safiya Umoja. Algorithms of Oppression. NYU Press, 2018.
  • Schrage, Michael, and David Kiron. “Philosophy Eats AI.” MIT Sloan Management Review, 2025.
  • von Foerster, Heinz. Understanding Understanding. Springer, 2003.
  • Wolfram, Stephen. “How to Think Computationally About AI.” 2023.
  • Zuboff, Shoshana. The Age of Surveillance Capitalism. PublicAffairs, 2019.

To design AI is to author a worldview. To lead with it is to be answerable for what it sees—and what it cannot.

#PhilosophyEatsAI #SyntheticJudgment #Ontology #GregoryBateson #MichaelSchrage #DavidKiron #KarenBarad #Foucault #vonFoerster #AIethics #MITSMR #Leadership #AIphilosophy #DeeperThinkingPodcast
