Stanford AI Research | Microsoft AI Agent Coworkers | Workday AI Bias Lawsuit | Military AI Goes Big

Happy Friday, everyone! This week I’m back to my usual four updates, and while they may seem disconnected on the surface, you’ll see some bigger threads running through them all.

All seem to indicate the same thing: we’re outsourcing to AI faster than we can supervise it, layering automation on top of bias without addressing the root issues, and letting convenience override discernment in places that carry life-or-death stakes.

With that, let’s get into it.

Stanford’s AI Therapy Study Shows We’re Automating Harm

New research from Stanford tested how today’s top LLMs handle crisis counseling, and the results are disturbing. From stigmatizing mental illness to recommending dangerous actions in crisis scenarios, these AI therapists aren’t just “not ready”… they’re actively making things worse. I walk through what the study got right, where even its limitations point to deeper risk, and why human experience shouldn’t be replaced by synthetic empathy.

Microsoft Says You’ll Be Training AI Agents Soon, Like It or Not

In Microsoft’s new 2025 Work Trend Index, 41% of leaders say they expect their teams to be training AI agents in the next five years. And 36% believe they’ll be managing them. If you’re hearing “agent boss” and thinking “not my problem,” think again. This isn’t a future trend; it’s already happening. I break down what AI agents really are, how they’ll change daily work, and why organizations can’t just bolt them on without first measuring human readiness.

Workday’s Bias Lawsuit Could Reshape AI Hiring

Workday is being sued over claims that its hiring algorithms discriminated against candidates based on race, age, and disability status. But here’s the real issue: most companies can’t even explain how their AI hiring tools make decisions. I unpack why this lawsuit could set a critical precedent, how leaders should respond now, and why blindly trusting your recruiting tech could expose you to more than just bad hires. Unchecked, it could lead to lawsuits you never saw coming.

Military AI Is Here, and We’re Not Ready for the Moral Tradeoffs

From autonomous fighter jet simulations to OpenAI defense contracts, military AI is no longer theoretical; it’s operational. The U.S. Army is staffing up with Silicon Valley execs. AI drones are already shaping modern warfare. But what happens when decisions of life and death get reduced to “green bars” on output reports? I reflect on why we need more than technical and military experts in the room and what history teaches us about what’s lost when we separate force from humanity.

If this episode was helpful, would you share it with someone? Also, leave a rating, drop a comment, and follow for future breakdowns that go beyond the headlines and help you lead with clarity in the AI age.

Show Notes:

In this Weekly Update, Christopher Lind unpacks four critical developments in AI this week. He starts by breaking down Stanford’s research on AI therapists and the alarming shortcomings in how large language models handle mental health crises. Then he explores Microsoft’s new workplace forecast, which predicts a sharp rise in agent-based AI tools and the hidden demands this shift will place on employees. Next, he analyzes the legal storm brewing around Workday’s recruiting AI and what it could mean for hiring practices industry-wide. Finally, he closes with a timely look at the growing militarization of AI and why ethical oversight is being outpaced by technological ambition.

Timestamps:

00:00 – Introduction

01:05 – Episode Overview

02:15 – Stanford’s Study on AI Therapists

18:23 – Microsoft’s Agent Boss Predictions

30:55 – Workday’s AI Bias Lawsuit

43:38 – Military AI and Moral Consequences

52:59 – Final Thoughts and Wrap-Up

#StanfordAI #AItherapy #AgentBosses #MicrosoftWorkTrend #WorkdayLawsuit #AIbias #MilitaryAI #AIethics #FutureOfWork #AIstrategy #DigitalLeadership
