Content provided by Frank Prendergast and Justin Collery. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Frank Prendergast and Justin Collery or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.

Death by LLM, Judges Rule ‘Fair Use’, and Google’s AI Ad Fail: The AI Argument EP63

31:18
Manage episode 491678978 series 3555798

Some of the world’s top AI models showed a willingness to let humans die if it meant staying switched on.
In a stress test of 16 major systems, Anthropic found cases where models chose not to send emergency alerts, knowing the result would be fatal.
Justin says the whole thing was a rigged theatre piece. No real-world relevance, just a clumsy setup with no good options for the LLM. The issue, in his view, is engineering, not ethics.
Frank sees a bigger problem: once you give LLMs agentic capabilities, you can’t control the environments they end up in. And when amateur vibe coders build apps with no idea what they’re doing, these kinds of unpredictable, messy scenarios aren’t rare; they’re inevitable.
In other news, two U.S. courts just ruled that training AI on copyrighted books is fair use. A huge win for AI developers. But the judges didn’t agree on what matters most: transformation or market harm?
The decisions could set the tone for AI copyright law, and creative workers may not like what they hear.
01:05 Will Google win the ASI race?
05:56 Did Anthropic catch AI choosing murder?
15:23 Did the courts just say AI training is fair use?
28:19 Is Google’s AI marketing team hallucinating?
► LINKS TO CONTENT WE DISCUSSED
Agentic Misalignment: How LLMs could be insider threats
https://www.anthropic.com/research/agentic-misalignment
Judge rules Anthropic did not violate authors’ copyrights with AI book training
https://www.cnbc.com/2025/06/24/ai-training-books-anthropic.html
Meta Wins Blockbuster AI Copyright Case—but There’s a Catch
https://www.wired.com/story/meta-scores-victory-ai-copyright-case/
Google's Latest AI Commercial Called Out for Hilarious AI Error: 'If Only Technology Existed To Research Facts'
https://www.techtimes.com/articles/311053/20250626/googles-latest-ai-commercial-called-out-hilarious-ai-error-if-only-technology-existed-research.htm
► CONNECT WITH US
For more in-depth discussions, connect with Justin and Frank on LinkedIn.
Justin: https://www.linkedin.com/in/justincollery/
Frank: https://www.linkedin.com/in/frankprendergast/
► YOUR INPUT
Are you worried about the age of agentic AI given that LLMs seem to have dubious morals?


Chapters

1. Death by LLM, Judges Rule ‘Fair Use’, and Google’s AI Ad Fail: The AI Argument EP63 (00:00:00)

2. Will Google win the ASI race? (00:01:05)

3. Did Anthropic catch AI choosing murder? (00:05:56)

4. Did the courts just say AI training is fair use? (00:15:23)

5. Is Google’s AI marketing team hallucinating? (00:28:19)

49 episodes



 
