Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.
AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 1)

37:10
 

*This episode is also available in video format! Click to watch the full YouTube video.*
Welcome to the final episode of the first season of The MLSecOps Podcast, brought to you by the team at Protect AI.

In this two-part episode, we’ll be taking a look back at some favorite highlights from the season, in which we dove deep into machine learning security operations. In this first part, we’ll be revisiting clips on topics like adversarial machine learning (how malicious actors can use AI to fool machine learning systems into making incorrect decisions), supply chain vulnerabilities, and red teaming for AI/ML (how security professionals might simulate attacks on their own systems to detect and mitigate vulnerabilities).
If you’re new to the show, or if you could use a refresher on any of these topics, this episode is for you, as it’s a great place for listeners to start their learning journey with us and work backwards based on individual interests. And when something in this recap piques your interest, be sure to check out the transcript for links to the full-length episodes where each of these clips came from. You can visit the website and read the transcripts at www.mlsecops.com/podcast.
So now, we invite you to sit back, relax, and enjoy this Season 1 recap of some of the most important MLSecOps topics of the year. And stay tuned for part 2 of this episode, where we’ll be revisiting MLSecOps conversations surrounding governance, risk, and compliance, model provenance, and Trusted AI. Thanks for listening.
Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy, MLSecOps Community Leader
2:15 S1E1 with Guest Disesdi Susanna Cox
5:08 S1E2 with Guest Dr. Florian Tramèr
10:16 S1E3 with Guest Pin-Yu Chen, PhD
13:18 S1E5 with Guest Christina Liaghati, PhD
17:59 S1E6 with Guest Johann Rehberger
22:10 S1E10 with Guest Kai Greshake
27:14 S1E11 with Guest Shreya Rajpal
31:45 S1E12 with Guest Apostol Vassilev
36:36 End/Credits
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


All episodes

 
Researchers Yifeng (Ethan) He and Peter Rong join host Madi Vorbrich to break down their paper "Security of AI Agents." They explore real-world AI agent threats, like session hijacks and tool-based jailbreaks, and share practical defenses, from sandboxing to agent-to-agent protocols. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-agent-security-threats-defenses-for-modern-deployments
 
 
Part 2 with Gavin Klondike dives into autonomous AI agents—how they really work, the attack paths they open, and practical defenses like least-privilege APIs and out-of-band auth. A must-listen roadmap for anyone building—or defending—the next generation of AI applications. Full transcript with links to resources available at https://mlsecops.com/podcast/autonomous-agents-beyond-the-hype
 
In Part 1 of this two-part MLSecOps Podcast, Principal Security Consultant Gavin Klondike joins Dan and Marcello to break down the real threats facing AI systems today. From prompt injection misconceptions to indirect exfiltration via markdown and the failures of ML Ops security practices, Gavin unpacks what the industry gets wrong—and how to fix it. Full transcript with links to resources available at https://mlsecops.com/podcast/beyond-prompt-injection-ais-real-security-gaps
 
What’s really hot at RSA Conference 2025? MLSecOps Community Manager Madi Vorbrich sits down with Protect AI Co-Founder Daryan “D” Dehghanpisheh for a rapid rundown of must-see sessions, booth events, and emerging AI-security trends—from GenAI agents to zero-trust AI and million-model scans. Use this episode to build a bullet-proof RSA agenda before you land in San Francisco. Full transcript with links to resources available at https://mlsecops.com/podcast/whats-hot-in-ai-security-at-rsa-conference-2025
 
In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance’s AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats—like model poisoning and adversarial manipulation—through robust technical controls, detailed implementation guidelines, and clear auditing strategies. Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-the-cloud-security-alliance-ai-controls-matrix
 
Join Keith Hoodlet from Trail of Bits as he dives into AI/ML security, discussing everything from prompt injection and fuzzing techniques to bias testing and compliance challenges. Full transcript with links to resources available at https://mlsecops.com/podcast/from-pickle-files-to-polyglots-hidden-risks-in-ai-supply-chains
 
This episode is a follow-up to Part 1 of our conversation with returning guest Brian Pendleton, as he challenges the way we think about red teaming and security for AI. Continuing from last week’s exploration of enterprise AI adoption and high-level security considerations, the conversation now shifts to how red teaming, zero trust, and privacy concerns intertwine with AI’s unique risks. Full transcript with links to resources available at https://mlsecops.com/podcast/rethinking-ai-red-teaming-lessons-in-zero-trust-and-model-protection
 
In part one of our two-part MLSecOps Podcast episode, security veteran Brian Pendleton takes us from his early hacker days to the forefront of AI security. Brian explains why mapping every AI integration is essential for uncovering vulnerabilities. He also dives into the benefits of using SBOMs over model cards for risk management and stresses the need to bridge the gap between ML and security teams to protect your enterprise AI ecosystem. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-security-map-it-manage-it-master-it
 
Join host Diana Kelley and CTO Dr. Gina Guillaume-Joseph as they explore how agentic AI, robust data practices, and zero trust principles drive secure, real-time video analytics at Camio. They discuss why clean data is essential, how continuous model validation can thwart adversarial threats, and the critical balance between autonomous AI and human oversight. Dive into the world of multimodal modeling, ethical safeguards, and what it takes to ensure AI remains both innovative and risk-aware. Full transcript with links to resources available at https://mlsecops.com/podcast/agentic-ai-tackling-data-security-and-compliance-risks
 
Join host Dan McInerney and AI security expert Sierra Haex as they explore the evolving challenges of AI security. They discuss vulnerabilities in ML supply chains, the risks in tools like Ray and untested AI model files, and how traditional security measures intersect with emerging AI threats. The conversation also covers the rise of open-source models like DeepSeek and the security implications of deploying autonomous AI agents, offering critical insights for anyone looking to secure distributed AI systems. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-vulnerabilities-ml-supply-chains-to-llm-and-agent-exploits
 
In this episode of the MLSecOps Podcast, host Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the world of responsible AI governance. They discuss how ethical principles, risk management, and robust security practices can be integrated throughout the AI lifecycle—from design and development to deployment and oversight. Learn practical strategies for building resilient AI frameworks, understanding regulatory impacts, and driving innovation safely. Full transcript with links to resources available at https://mlsecops.com/podcast/implementing-a-robust-ai-governance-framework-for-business-success
 
In this episode, we explore LLM red teaming beyond simple “jailbreak” prompts with special guest Donato Capitella from WithSecure Consulting. You’ll learn why vulnerabilities live in context—how LLMs interact with users, tools, and documents—and discover best practices for mitigating attacks like prompt injection. Our guest also previews an open-source tool for automating security tests on LLM applications. Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-generative-ai-red-teaming-and-practical-security-solutions
 
In this episode of the MLSecOps Podcast, the team dives into the transformative potential of Vulnhuntr: zero-shot vulnerability discovery using LLMs. Madison Vorbrich hosts Dan McInerney and Marcello Salvati to discuss Vulnhuntr’s ability to autonomously identify vulnerabilities, including zero-days, using large language models (LLMs) like Claude. They explore the evolution of AI tools for security, the gap between traditional and AI-based static code analysis, and how Vulnhuntr enables both developers and security teams to proactively safeguard their projects. The conversation also highlights Protect AI’s bug bounty platform, huntr.com, and its expansion into model file vulnerabilities (MFVs), emphasizing the critical need to secure AI supply chains and systems.
 
In this episode of the MLSecOps Podcast, Charlie McCarthy from Protect AI sits down with Dr. Cari Miller to discuss the evolving landscapes of AI procurement and governance. Dr. Miller shares insights from her work with the AI Procurement Lab and ForHumanity, delving into the essential frameworks and strategies needed to mitigate risks in AI acquisitions. They cover the AI Procurement Risk Management Framework, practical ways to ensure transparency and accountability, and how the September 2024 OMB Memo M-24-18 is guiding AI acquisition in government. Dr. Miller also emphasizes the importance of cross-functional collaboration and AI literacy to support responsible AI procurement and deployment in organizations of all types. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-governance-essentials-empowering-procurement-teams-to-navigate-ai-risk
 
In this episode of the MLSecOps Podcast, Distinguished Engineer Nicole Nichols from Palo Alto Networks joins host and Machine Learning Scientist Mehrin Kiani to explore critical challenges in AI and cybersecurity. Nicole shares her unique journey from mechanical engineering to AI security, her thoughts on the importance of clear AI vocabularies, and the significance of bridging disciplines in securing complex systems. They dive into the nuanced definitions of AI fairness and safety, examine emerging threats like LLM backdoors, and discuss the rapidly evolving impact of autonomous AI agents on cybersecurity defense. Nicole’s insights offer a fresh perspective on the future of AI-driven security, teamwork, and the growth mindset essential for professionals in this field.
 