
3288: MLPerf vs Moore’s Law: Redefining AI Progress

39:13
Content provided by Neil C. Hughes. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Neil C. Hughes or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

What happens when the world's most powerful AI systems are measured by the same yardstick?

In this episode of Tech Talks Daily, I spoke with David Kanter, Founder and Executive Director of MLCommons, the organization behind MLPerf, the industry's most recognized benchmark for AI performance. As AI continues to outpace Moore’s Law, businesses and governments alike are asking the same question: how do we know what “good” AI performance really looks like? That’s exactly the challenge MLCommons set out to address.

David shares the story of how a simple suggestion at a Stanford meeting took him from industry analyst to architect of a global benchmarking initiative. He explains how MLPerf benchmarks are helping enterprises and policymakers make informed decisions about AI systems, and why transparency, neutrality, and open collaboration are central to the mission.

We explore what’s really driving AI’s explosive growth. It’s not just about chips. Smarter software, algorithmic breakthroughs, and increasingly scalable system designs are all contributing to performance improvements far beyond what Moore’s Law predicted.
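To make that comparison concrete, here is a minimal illustrative sketch of how quickly a hardware-only Moore's Law trajectory compounds versus a faster benchmark-measured gain. The observed speedup below is a hypothetical placeholder, not a published MLPerf figure:

```python
# Minimal sketch: Moore's Law predicts roughly a doubling every ~2 years.
# The "observed" gain below is a hypothetical placeholder, not MLPerf data.

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Performance factor expected from doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

years = 5
predicted = moores_law_factor(years)   # 2**(5/2) ~= 5.7x
observed = 50.0                        # hypothetical measured speedup

print(f"Moore's Law over {years} years: ~{predicted:.1f}x")
print(f"Hypothetical observed speedup:  {observed:.0f}x")
print(f"Remainder from software, algorithms, and scale: ~{observed/predicted:.1f}x")
```

The gap between the two lines is the point of the episode: whatever sits beyond the silicon curve has to come from smarter software, better algorithms, and bigger, better-organized systems.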

But AI’s rapid progress comes with a cost. Power consumption is quickly becoming one of the biggest challenges in the industry. David explains how MLCommons is helping address this with MLPerf Power and why infrastructure innovations like low-precision computation, advanced cooling, and even proximity to power generation are gaining traction.
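As a rough illustration of why low-precision computation helps on the power front, the sketch below shows how the weight-memory footprint shrinks as bits per value drop; less data moved means less energy spent. The parameter count is a hypothetical example, and real energy savings depend on the hardware:

```python
# Rough sketch: halving the bits per value halves the weight-memory footprint,
# which cuts data movement, a major contributor to power draw.
# The 7B parameter count is a hypothetical example, not a specific model.

formats = {"FP32": 32, "FP16": 16, "INT8": 8}
params = 7_000_000_000  # hypothetical 7-billion-parameter model

for name, bits in formats.items():
    gigabytes = params * bits / 8 / 1e9
    print(f"{name:>4}: {bits:2d} bits/value -> ~{gigabytes:5.1f} GB of weights")
```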

We also talk about the decision by some major vendors not to participate in MLPerf. David offers perspective on what that means for buyers and why benchmark transparency should be part of any enterprise AI procurement conversation.

Beyond the data center, MLCommons is now benchmarking AI performance on consumer hardware through MLPerf Client and is working on domain-specific efforts such as MLPerf Automotive. As AI shows up in smartphones, vehicles, and smart devices, the need for clear, fair, and relevant performance measurement is only growing.

So how do we measure AI that is everywhere? What should buyers demand from vendors? And how can the industry ensure that AI systems are fast, efficient, and accountable? Let’s find out.
