Content provided by Neil C. Hughes. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Neil C. Hughes or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
3280: Yobi and the Future of Ethical AI at Scale

32:28
Manage episode 483215880 series 80936
What if companies could tap into powerful behavioral AI without compromising user privacy or crossing legal lines? In this episode of Tech Talks Daily, I sit down with Frank Portman, CTO of Yobi, to explore how his team is building foundation models grounded in real user behavior, backed by ethically sourced and consented data.

Frank shares how Yobi is taking a distinct approach. They're not building large language models or racing to dominate generative AI headlines. Instead, they're focused on data integrity, transparency, and security from day one. Through a strategic partnership with Microsoft Azure, Yobi delivers models that run directly within a customer’s environment. That means privacy is preserved, data stays protected, and companies still benefit from intelligent, adaptive systems.

We unpack how Yobi avoids risky use cases like financial underwriting and healthcare, how their models are trained to avoid demographic bias, and why they actively reward systems for being bad at guessing personal traits. This isn’t just about compliance. It’s about designing products that work better because they’re built responsibly.

Frank also opens up about Yobi’s internal culture, his belief in first-principles thinking, and how empowering engineers to "place bets" drives innovation. He offers insight into what the AI industry must learn quickly from recent missteps, including data misuse and growing public skepticism.

If you're exploring AI solutions and wondering how to build or buy systems that scale without cutting ethical corners, this conversation delivers clarity, honesty, and direction. Are you ready to rethink what responsible AI should look like inside your company?

2045 episodes

