Geoff Hinton on revolutionizing artificial intelligence... again
Over the past ten years, AI has seen breakthrough after breakthrough, in everything from computer vision and speech recognition to protein folding prediction.
Many of these advances hinge on the deep learning work of our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. A recipient of the Turing Award, the computer science equivalent of the Nobel Prize, he has been cited more than half a million times.
Hinton has spent about half a century on deep learning, much of it researching in relative obscurity. That all changed in 2012, when Hinton and his students showed that deep learning outperforms every other approach to computer vision at image recognition, and by a very large margin. That result, now known as the ImageNet moment, changed the whole AI field: pretty much everyone dropped what they had been doing and switched to deep learning.
Geoff joins Pieter for our two-part season finale, a wide-ranging discussion drawing on insights from Hinton's journey from academia to Google Brain. The episode covers how today's neural networks and backpropagation differ from how the brain actually works; the purpose of sleep; and why it's better to grow our computers than to manufacture them.
SUBSCRIBE TO THE ROBOT BRAINS PODCAST TODAY | Visit therobotbrains.ai and follow us on YouTube at TheRobotBrainsPodcast, Twitter @therobotbrains, and Instagram @therobotbrains.
Hosted on Acast. See acast.com/privacy for more information.