Scott & Mark Learn To… Induced Hallucinations

25:07

Content provided by Bruce Bracken. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Bruce Bracken or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

In this episode of Scott and Mark Learn To, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the difference between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.
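The episode itself is conversational, but the grounding idea it touches on can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not anything shown on the show: `ask_model`, the sample question, and the retrieved snippet are all assumed placeholders, standing in for whatever chat-completion API and retrieval step you actually use.

```python
# Minimal, hypothetical sketch (not from the episode) contrasting an
# ungrounded prompt, which invites a confident hallucination, with a
# grounded prompt that supplies retrieved context.

def ask_model(prompt: str) -> str:
    """Placeholder LLM call; wire this to a real model endpoint."""
    raise NotImplementedError

question = "What year did Mark Russinovich join Microsoft?"

# Ungrounded: the model answers from memory alone and may state a wrong
# year with complete confidence.
ungrounded_prompt = question

# Grounded: retrieved text is included in the prompt, and the model is
# instructed to answer only from it or say it doesn't know.
retrieved = ("Microsoft acquired Winternals in 2006, and Mark Russinovich "
             "joined Microsoft as part of that acquisition.")
grounded_prompt = (
    "Answer using only the context below. If the answer is not in the "
    "context, say 'I don't know.'\n\n"
    f"Context: {retrieved}\n"
    f"Question: {question}"
)

for label, prompt in (("ungrounded", ungrounded_prompt),
                      ("grounded", grounded_prompt)):
    print(f"--- {label} ---")
    print(prompt)
    # print(ask_model(prompt))  # uncomment once ask_model is implemented
```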

Takeaways:

  • AI is getting better, but we still need to be careful and double-check our work
  • AI sometimes gives wrong answers confidently
  • Jailbreaks break the rules on purpose, while hallucinations are just AI making stuff up

Who are they?

View Scott Hanselman on LinkedIn

View Mark Russinovich on LinkedIn

Watch Scott and Mark Learn on YouTube

Listen to other episodes at scottandmarklearn.to

Discover and follow other Microsoft podcasts at microsoft.com/podcasts


Hosted on Acast. See acast.com/privacy for more information.
