
State of AI Risk with Peter Slattery

Duration: 1:09:57
 
Content provided by Samuel Salzer and Aline Holzwarth. All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by Samuel Salzer and Aline Holzwarth or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

Understanding AI Risks with Peter Slattery

In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Peter Slattery, behavioral scientist and lead researcher at MIT’s FutureTech lab, where he spearheads the groundbreaking AI Risk Repository project. Together, they dive into the complex and often overlooked risks of artificial intelligence—ranging from misinformation and malicious use to systemic failures and existential threats.

Peter shares the intellectual and emotional journey behind categorizing over 1,000 documented AI risks, how his team built a risk taxonomy from 17,000+ sources, and why shared understanding and behavioral science are critical for navigating the future of AI.

This one is a must-listen for anyone curious about AI safety, behavioral science, and the future of technology that’s moving faster than most of us can track.

--

LINKS:

  • Peter's LinkedIn Profile
  • MIT FutureTech Lab: futuretech.mit.edu
  • AI Risk Repository

--

Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at [email protected] or book a call directly on our website: nuancebehavior.com.

Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

Get in touch via [email protected]

The song used is Murgatroyd by David Pizarro.
