Artwork

Content provided by conversationswithkate. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by conversationswithkate or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Exposing LLM Vulnerabilities

26:50
 
Share
 

Manage episode 491860612 series 3663044
Content provided by conversationswithkate. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by conversationswithkate or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

What if the very tools designed to make us smarter are also making us vulnerable?
A single prompt. A subtle tweak. A forgotten language. That’s all it takes.

As LLMs weave themselves into the fabric of our daily lives, their promise feels limitless — until you look beneath the surface. In this wide-ranging and quietly urgent conversation, Kate and Andrew explore the evolving landscape of AI vulnerabilities, from adversarial attacks and prompt injections to multilingual blind spots and poisoned training data. They share stories from real-world projects, reflect on the role of collaborative tools in catching threats early, and unpack why even small teams must prioritise security from day one.

Together, they don’t just highlight what can go wrong — they illuminate the pathways forward. This is a thoughtful, human-centred episode about risk, responsibility, and the power of working together in a rapidly changing world.

This is one of those episodes that stays with you long after the headlines fade.

  continue reading

4 episodes

Artwork
iconShare
 
Manage episode 491860612 series 3663044
Content provided by conversationswithkate. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by conversationswithkate or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

What if the very tools designed to make us smarter are also making us vulnerable?
A single prompt. A subtle tweak. A forgotten language. That’s all it takes.

As LLMs weave themselves into the fabric of our daily lives, their promise feels limitless — until you look beneath the surface. In this wide-ranging and quietly urgent conversation, Kate and Andrew explore the evolving landscape of AI vulnerabilities, from adversarial attacks and prompt injections to multilingual blind spots and poisoned training data. They share stories from real-world projects, reflect on the role of collaborative tools in catching threats early, and unpack why even small teams must prioritise security from day one.

Together, they don’t just highlight what can go wrong — they illuminate the pathways forward. This is a thoughtful, human-centred episode about risk, responsibility, and the power of working together in a rapidly changing world.

This is one of those episodes that stays with you long after the headlines fade.

  continue reading

4 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Quick Reference Guide

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play