Should AI Agents Be Trusted? The Problem and Solution, w/ Billions.Network CEO Evin McMullen

45:44
 
What happens when an AI agent says something harmful, or makes a costly mistake? Who’s responsible—and how can we even know who the agent belongs to in the first place?

In this episode of AI-Curious, we talk with Evin McMullen, CEO and co-founder of Billions.Network, a startup building cryptographic trust infrastructure to verify the identity and accountability of AI agents and digital content.

We explore the unsettling rise of synthetic media and deepfakes, why identity verification is foundational to AI safety, and how platforms—not users—should be responsible for determining what’s real. Evin explains how Billions uses zero-knowledge proofs to establish trust without compromising privacy, and offers a vision for a future where billions of AI agents operate transparently, under clear reputational and legal frameworks.
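
For the technically curious: the core primitive Evin describes, a zero-knowledge proof, lets a prover demonstrate it holds a secret (for example, a private key tied to a verified identity) without revealing that secret. Below is a minimal Schnorr-style identification sketch in Python. The toy parameters and the flow are purely illustrative assumptions for these notes, not Billions' actual protocol.

  import secrets

  # Toy safe-prime group for illustration only (p = 2q + 1); real systems use
  # 2048-bit-plus groups or elliptic curves.
  p = 2039   # safe prime
  q = 1019   # prime order of the quadratic-residue subgroup
  g = 4      # generator of that subgroup (4 is a square mod p)

  # Key generation: the prover's secret x and public key y = g^x mod p
  x = secrets.randbelow(q)
  y = pow(g, x, p)

  # 1) Commit: prover picks a random nonce r and sends t = g^r mod p
  r = secrets.randbelow(q)
  t = pow(g, r, p)

  # 2) Challenge: verifier replies with a random value c
  c = secrets.randbelow(q)

  # 3) Respond: prover sends s = r + c*x mod q, which reveals nothing about x on its own
  s = (r + c * x) % q

  # 4) Verify: accept if and only if g^s == t * y^c (mod p)
  assert pow(g, s, p) == (t * pow(y, c, p)) % p
  print("Proof accepted: knowledge of x demonstrated without revealing x")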

Along the way, we cover:

  • The problem with unverified AI agents (2:00)
  • Why 50% of online traffic is now bots—and why that matters (2:45)
  • What “identity” means in an AI-first internet (10:00)
  • The difference between chatbots and agentic AI (13:00)
  • The Air Canada chatbot legal fiasco (15:00)
  • Deepfakes, misinformation, and the limits of user responsibility (22:00)
  • Billions’ “deep trust” framework, explained (29:00)
  • How platforms can earn trust by verifying content authenticity (34:00)
  • Breaking news: Billions’ work with the European Commission (38:20)

This one dives deep into the infrastructure of digital trust—and why the future of AI may depend on getting this right.

Learn more: https://billions.network

🎧 Subscribe to AI-Curious:

Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308

Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b

YouTube
https://www.youtube.com/@jeffwilser
