Content provided by Chris Hoffman and Nasim Motalebi. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Chris Hoffman and Nasim Motalebi or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

AI Regulations: Trickling up, Pouring Down, or Nowhere to Be Seen?

44:34
 

Manage episode 476931621 series 3625878

Who sets the rules for AI and who gets left behind? In this episode of Humanitarian Frontiers in AI, we’re joined by Gabrielle Tran, Senior Analyst at the Institute for Security and Technology (IST), and Richard Mathenge, Co-founder of Techworker Community Africa (TCA), to explore the global landscape of AI regulation and its humanitarian impact. From the hidden labor behind AI models to the ethical and political tensions in governance, this conversation unpacks the fragmented policies shaping AI’s future, from the EU’s AI Act to the U.S.’s decentralized approach. Richard sheds light on the underpaid, invisible workforce behind AI moderation and training, while Gabrielle examines the geopolitical power struggles in AI governance and whether global policies can ever align. We also tackle AI’s high-risk deployment in humanitarian work, the responsibilities of NGOs using AI in the Global South, and potential solutions like data trusts to safeguard vulnerable populations. If you care about the future of AI in humanitarian efforts, this episode breaks down the challenges, risks, and urgent questions shaping the path forward. Tune in to understand what’s at stake (and why it matters)!

Key Points From This Episode:

  • The hidden labor of AI: how AI models rely on underpaid human moderators.
  • AI ethics versus the ethics of AI and how ethical concerns are framed as technical fixes.
  • Insight into the sometimes murky origins of training datasets.
  • Contrasting the EU’s AI Act with America’s decentralized approach.
  • The risks of AI deployment in humanitarian work, particularly in crisis zones.
  • Accountability in AI supply chains: how new EU policies may enforce transparency.
  • Reasons that AI governance is a low priority in many African nations.
  • Why tech giants typically only comply with AI policy when it benefits them.
  • AI for surveillance versus humanitarian use: the double-edged sword of AI governance.
  • An introduction to the concept of data trusts to safeguard humanitarian AI data.
  • Ensuring informed consent for workers when building and monitoring AI tools.
  • The role of humanitarian organizations like the UN in enforcing “digital rights.”
  • What goes into building an ethical future for AI in humanitarian work.

Links Mentioned in Today’s Episode:

Richard Mathenge on LinkedIn

Techworker Community Africa (TCA)

Gabrielle Tran on LinkedIn

Gabrielle Tran on X

Institute for Security and Technology (IST)

EU AI Act

National Institute of Standards and Technology (NIST)

AI Risk Management Framework (RMF)

Occupational Safety and Health Administration (OSHA)

The Alignment Problem

Nasim Motalebi

Nasim Motalebi on LinkedIn

Chris Hoffman on LinkedIn

Innovation Norway

