
Content provided by World Wide Technology. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by World Wide Technology or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Data Poisoning, Prompt Injection and Exploring Vulnerabilities of Gen AI

34:13

AI systems are only as trustworthy as the data they're trained on — but what happens when that data is intentionally corrupted? WWT's Justin Hadler and Chance Cornell break down the growing threats of data poisoning and prompt injection, explore the challenges of securing AI pipelines and dive into why the next big cybersecurity frontier starts inside the model.

Learn more about this week's guests:

Chance Cornell joined WWT in 2021 as an intern and is now a Technical Solutions Engineer on the Users and Devices Team, focusing on Endpoint Security, including EDR and XDR. He has developed labs and content around technologies like CrowdStrike, Elastic, and Fortinet. Chance holds a B.S. in Computer Engineering from the University of South Florida.

Chance's top pick: Hands-On Lab Workshop: LLM Security

Justin Hadler is a seasoned technologist with over 20 years of experience in sales and technology. As a Technical Solutions Architect at WWT, he specializes in AI Security. Previously, Justin was a Regional Sales Manager at BigID, helping organizations tackle data discovery and classification challenges. He holds multiple certifications, including CCIE and CISSP, and degrees in Finance, MIS, and Economics. Justin is also Vice Chairman of Search Ministries' Leadership Board and mentors the next generation in tech and sales.

Justin's top pick: Secure Your Future: A CISO's Guide to AI

Support for this episode provided by: CrowdStrike

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.


Chapters

1. The New AI Threat: Data Poisoning (00:00:00)

2. Understanding AI Vulnerabilities (00:01:25)

3. Data Poisoning in Action (00:04:06)

4. Security Risks and Corporate Pressure (00:07:28)

5. Defending Against Malicious Attacks (00:12:56)

6. Agentic Security and Future Challenges (00:19:50)

7. Key Takeaways on AI Security (00:25:33)

17 episodes

