Content provided by American Public Media. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by American Public Media or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

AI can't read the room

6:04
 

Leyla Isik, a professor of cognitive science at Johns Hopkins University, is a senior scientist on a new study examining how good AI is at reading social cues. She and her research team took short videos of people interacting (two people chatting, two babies on a playmat, a pair performing a synchronized skate routine) and showed them to human participants, who were then asked questions such as: Are these two communicating with each other? Is the interaction positive or negative? The team then showed the same videos to over 350 open-source AI models. (That's a lot, though it didn't include all the latest and greatest ones out there.) Isik found that the AI models were much worse than humans at understanding what was going on. Marketplace's Stephanie Hughes visited Isik at her lab at Johns Hopkins to discuss the findings.

151 episodes

Marketplace Tech

