Content provided by Linear Digressions, Ben Jaffe, and Katie Malone. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Linear Digressions, Ben Jaffe, and Katie Malone or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Zeroing in on what makes adversarial examples possible
Adversarial examples are really, really weird: pictures of penguins that get classified with high certainty by machine learning algorithms as drum sets, or random noise labeled as pandas, or any one of an infinite number of labeling mistakes that humans would never make but computers make with joyous abandon. What gives? A compelling new argument makes the case that it’s not the algorithms so much as the features in the datasets that hold the clue. This week’s episode goes through several papers pushing our collective understanding of adversarial examples, and giving us clues to what makes these counterintuitive cases possible.

Relevant links:
https://arxiv.org/pdf/1905.02175.pdf
https://arxiv.org/pdf/1805.12152.pdf
https://distill.pub/2019/advex-bugs-discussion/
https://arxiv.org/pdf/1911.02508.pdf
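To make the phenomenon concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard recipe for constructing adversarial examples. Everything in it is illustrative: the toy linear classifier, its weights, and the input are invented for this sketch, not taken from any model or paper discussed in the episode. The point it demonstrates is the episode's theme in miniature: each coordinate of the input is nudged by an imperceptible amount, yet because the model's "features" all line up with the perturbation, the tiny changes accumulate and flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps a score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=100)           # fixed weights of a toy linear classifier
x = 0.5 * w / np.linalg.norm(w)    # an input the model confidently labels "1"

p_clean = sigmoid(w @ x)           # high confidence on the clean input

# FGSM step: move every coordinate a tiny amount epsilon against the
# gradient of the score with respect to x (which is just w here).
# No single coordinate changes by more than epsilon, but in 100
# dimensions the per-coordinate effects add up across all of them.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

p_adv = sigmoid(w @ x_adv)         # confidence collapses on the perturbed input

print(f"clean: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

With these toy numbers the clean input is classified as "1" with probability above 0.9, while the adversarial copy, which differs by at most 0.1 in any coordinate, drops below 0.5. This is the high-dimensional accumulation effect that makes adversarial examples possible in the first place; the papers linked above go further and ask which dataset features make models so easy to push around this way.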
291 episodes