Ignite AI: Minha Hwang on Scaling AI Experiments and Building Smarter Models with Less Data | Ep167

36:54
 
Content provided by Brian Bell. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Brian Bell or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Minha Hwang is a principal applied scientist with a rare blend of technical depth and cross-disciplinary experience, holding dual PhDs in materials science and marketing science. He’s spent his career at the intersection of machine learning, experimentation, and causal inference, helping scale some of the most sophisticated AI evaluation systems in the world. Prior to his current work, Minha helped launch a data science arm within a major consulting firm and served as a business school professor focused on marketing analytics and statistical decision-making.

In this episode, Minha shares why false positives are rampant in real-world experiments, how to make A/B testing more sensitive and reliable, and why most machine learning teams overlook the power of causal inference. We also explore the growing importance of reinforcement learning, open-weight models, and how to evaluate AI when traditional metrics fall short.

In Today’s Episode We Discuss:

00:00 Intro
00:40 Minha’s Engineering Roots and PhD at MIT
01:55 Jumping from Engineering to Consulting at McKinsey
03:15 Why He Went Back for a Second PhD
04:35 Transition from Academia to Applied Data Science
06:00 Building McKinsey’s Data Science Arm
07:30 Moving to Microsoft to Explore Unstructured Data
08:40 Making A/B Testing More Sensitive with ML
10:00 Why False Positives Are a Massive Problem
11:05 How to Validate Experiments Through “Solidification”
12:10 The Importance of Proxy and Debugging Metrics
13:35 Model Compression and Quantization Explained
15:00 Balancing Statistical Rigor with Product Speed
16:30 Why Data, Not Model Training, Is the Bottleneck
18:00 Causal Inference vs. Machine Learning
20:00 Measuring What You Can’t Observe
21:15 The Missing Role of Causality in AI Education
22:15 Reinforcement Learning and the Data Scarcity Problem
23:40 The Rise of Open-Weight Models Like DeepSeek
25:00 Can Open Source Overtake Closed Labs?
26:15 IP Grey Areas in Foundation Model Training
27:35 Multimodal Models and the Future of Robotics
29:20 Simulated Environments and Physical AI
30:25 AGI, Overfitting, and the Benchmark Illusion
32:00 Practical Usefulness over Philosophical Debates
33:25 Most Underrated Metrics in A/B Testing
34:35 Favorite AI Papers and Experimentation Tools
36:30 Measuring Preferences with Discrete Choice Models
36:55 Outro

Subscribe on Spotify: https://open.spotify.com/show/6Ga6v0YUsHotLhjap67uu5
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/ignite-conversations-on-startups-venture-capital-tech/id1709248824
Follow Brian Bell on X: https://x.com/brianrbell?lang=en
Follow Minha Hwang on LinkedIn: https://www.linkedin.com/in/minha-hwang-7440771/
Follow Minha Hwang on Twitter: https://x.com/minhahwang
Visit Our Website: https://www.teamignite.ventures/

👂🎧 Watch, listen, and follow on your favorite platform: https://tr.ee/S2ayrbx_fL
🙏 Join the conversation on your favorite social network: https://linktr.ee/theignitepodcast
