#1563: Deconstructing AI Hype with “The AI Con” Authors Emily M. Bender and Alex Hanna
Content provided by Kent Bye. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Kent Bye or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
We are at the peak of a hype cycle around AI, with a lot of hyperbolic claims being made about the capabilities and performance of large language models (LLMs). Computational Linguist Emily M. Bender and Sociologist Alex Hanna have been writing academic papers about the limitations of LLMs and some of the more pernicious aspects of benchmark culture in machine learning, as well as documenting many of the environmental, labor, and human rights harms from both the creation and deployment of these LLMs. Their book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want comprehensively deconstructs many of the false promises of AI, the playbook for AI Hype, and the underlying dynamics of how AI is an automation technology designed to consolidate power.

Their book unpacks so many vital parts of the Science and Technology Studies narrative around AI, including:
- How big technology companies have been using AI as a marketing term to describe disparate technologies that have many limitations
- How we anthropomorphize AI tech based on our concepts of intelligence
- How AI-boosting companies are devaluing what it means to be human in order to promote AI technology
- How AI Boosters and AI Doomers are two sides of the same coin, both assuming that AI is all-powerful and completely inevitable
- How many of the harms and costs associated with the technology are often out of sight and out of mind

The book takes a critical look at these so-called AI technologies, deconstructs the language we use when we talk about these automating technologies, breaks down the hype playbook of Big Tech, and restores the relational quality of human intelligence that is often collapsed by AI. It also provides some really helpful questions to ask in order to interrogate the hyperbolic claims that we're hearing from AI boosters. We talk about all of this and more on today's episode, and I have a feeling that this is an episode that I'll be referring back to often.

This is also the 100th Voices of VR podcast episode exploring the intersection of AI and XR, and I expect to continue covering how folks in the XR industry are using AI. Being in right relationship to every aspect of the economic, ethical and moral, social, labor, legal, and property rights dimensions of AI technologies is still an aspirational position. It's not impossible, but it is also not easy. This conversation helps to frame a lot of the deeper questions that I will continue to have about AI, and Bender and Hanna also provide a lot of clues to the red flags of AI Hype, as well as some of the core questions to ask that help orient around these deeper ethical questions.

I've also been editing unpublished and vaulted episodes of the Voices of AI podcast that I recorded with AI researchers at the International Joint Conference on Artificial Intelligence back in 2016 and 2018 (as well as at a couple of other conferences), and I'm hoping to relaunch the Voices of AI later this summer to look back at what researchers were saying about AI 7-9 years ago. That should give some important historical context that's often collapsed within the current days of AI Hype (SPOILER ALERT: this is neither the first nor the last hype cycle that AI will have).

I'll also be participating in a Socratic-style debate, where I'll mostly be arguing critically against AI, on the last day of AWE (Thursday, June 12th, 2:45pm), after the Expo has closed down and before the final session. So come check out a live debate with a couple of AI Boosters and an AI Doomer. Also look for an interview that I just recorded with Process Philosopher Matt Segall, coming here soon, diving more into a Process-Relational Philosophy perspective on AI, intelligence, and consciousness. Segall and I explore an elemental approach to intelligence, based upon concepts from my elemental theory of presence talk. Intelligence, privacy,
…