Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability

47:07
Content provided by Sequoia Capital. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Sequoia Capital or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Eric Ho is building Goodfire to solve one of AI’s most critical challenges: understanding what’s actually happening inside neural networks. His team is developing techniques to understand, audit, and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model-editing demonstrations, and real-world applications in genomics with Arc Institute's DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society.

Hosted by Sonya Huang and Roelof Botha, Sequoia Capital
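The episode's references to superposition and sparse autoencoders can be made concrete with a minimal sketch. A sparse autoencoder (SAE) is trained to reconstruct a model's hidden activations through an overcomplete feature dictionary, with an L1 penalty so that only a few features fire per input; those sparse features are what interpretability researchers inspect and edit. The PyTorch code below is a generic illustration under assumed dimensions and hyperparameters, not Goodfire's actual implementation.

```python
# Minimal sparse autoencoder (SAE) sketch for interpretability.
# Illustrative only: sizes, hyperparameters, and the random stand-in
# activations are placeholder assumptions, not Goodfire's method.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Overcomplete dictionary: d_features >> d_model, so separate features
        # can pull apart concepts the model stores in superposition.
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # Non-negative feature activations; most should be zero per input.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features


def sae_loss(reconstruction, activations, features, l1_coeff: float = 1e-3):
    # Reconstruction term keeps the dictionary faithful to the activations;
    # the L1 term pushes toward sparse, more interpretable features.
    mse = (reconstruction - activations).pow(2).mean()
    sparsity = features.abs().sum(dim=-1).mean()
    return mse + l1_coeff * sparsity


if __name__ == "__main__":
    d_model, d_features = 512, 4096  # assumed sizes
    sae = SparseAutoencoder(d_model, d_features)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

    # Stand-in for hidden activations collected from a model's residual stream.
    batch = torch.randn(64, d_model)
    recon, feats = sae(batch)
    loss = sae_loss(recon, batch, feats)
    loss.backward()
    opt.step()
    print(f"loss={loss.item():.4f}, "
          f"active features per example={(feats > 0).float().sum(dim=-1).mean().item():.1f}")
```

Feature-level editing, as discussed in the episode, then amounts to adjusting selected entries of `features` (for example, clamping one feature to zero or to a fixed value) before decoding back into the model's activation space; the specifics vary by approach.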
