Managing Vulnerabilities in Machine Learning and Artificial Intelligence Systems

40:59

Content provided by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.

The robustness and security of artificial intelligence, and specifically machine learning (ML), are of vital importance. Yet ML systems are vulnerable to adversarial attacks, which range from making the system learn the wrong thing (data poisoning), to making it do the wrong thing (evasion attacks), to making it reveal the wrong thing (model inversion). Although there are several efforts to provide detailed taxonomies of the kinds of attacks that can be launched against a machine learning system, none are organized around operational concerns. In this podcast, Jonathan Spring, Nathan VanHoudnos, and Allen Householder, all researchers at the Carnegie Mellon University Software Engineering Institute, discuss the management of vulnerabilities in ML systems as well as the Adversarial ML Threat Matrix, which aims to close this gap between academic taxonomies and operational concerns.
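
The attack classes named above are easiest to see concretely in code. Below is a minimal, hypothetical sketch of one of them, an evasion attack, using the fast gradient sign method (FGSM) against a toy logistic-regression model in NumPy. It is illustrative only: the helper name fgsm_perturb, the toy weights, and the epsilon budget are assumptions made for this sketch, not anything taken from the podcast or from the Adversarial ML Threat Matrix itself.

```python
import numpy as np

# Toy logistic-regression "victim" model: p(y=1|x) = sigmoid(w.x + b).
# Weights are fixed here purely for illustration; a real evasion attack
# would target a trained model (white-box) or estimates of its gradients.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y_true, eps=0.1):
    """Fast gradient sign method (hypothetical helper for this sketch).

    For the logistic loss L, the input gradient is dL/dx = (p - y) * w,
    so the attacker nudges x by eps in the sign of that gradient to
    *increase* the loss, i.e. push the model toward the wrong answer.
    """
    p = predict(x)
    grad_x = (p - y_true) * w         # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # bounded (L-infinity) perturbation

# A clean input the model classifies one way ...
x_clean = rng.normal(size=8)
y_true = 1.0 if predict(x_clean) >= 0.5 else 0.0

# ... and its adversarial counterpart: a small, bounded change that lowers
# the model's confidence (or flips its decision for a large enough eps).
x_adv = fgsm_perturb(x_clean, y_true, eps=0.25)
print(f"clean score: {predict(x_clean):.3f}  adversarial score: {predict(x_adv):.3f}")
```

FGSM is used here only because it needs a single gradient step; iterative variants such as PGD follow the same pattern with a projection back into the perturbation budget after each step. The other two attack classes differ mainly in where the adversary intervenes: data poisoning corrupts training examples before the model is fit, and model inversion issues repeated queries to a trained model to recover properties of its training data.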
