
AI in 2025 – A global perspective, with Kai-Fu Lee

50:23

Kai-Fu Lee joins me to discuss AI in 2025. Kai-Fu is a storied, Taiwan-born AI researcher, investor, inventor and entrepreneur, and one of the leading AI experts working in Asia, so I wanted to get his take on the Chinese market in particular.

Key insights:

  • Kai-Fu noted that unlike the singular “ChatGPT moment” that stunned Western audiences, the Chinese market encountered generative AI in a more “incremental and distributed” fashion.
  • A particularly fascinating shift is how Chinese enterprises are adopting generative AI. Without the entrenched SaaS layers common in the US, Chinese companies are “rolling their own” solutions. This deep integration might be tougher and messier, but it encourages thorough, domain-specific implementations.
  • We reflected on a structural shift in how we think about productivity software. With AI “conceptualizing” the document and the user providing strategic nudges, it’s akin to reversing the traditional creative process.
  • We’re moving from a training-centric world to an inference-centric one. Models need to be cheaper, faster and less resource-intensive to run, not just to train. For instance, his team at ZeroOne.ai managed to train a top-tier model on “just” 2,000 H100 GPUs and bring inference costs down to 10 cents per million tokens, a fraction of GPT-4’s early costs (see the rough cost comparison after this list).
  • In 2025, Kai-Fu predicts, we’ll see fewer “demos” and more “AI-first” applications deploying text, image and video generation tools into real-world workflows.
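
To make “a fraction of GPT-4’s early costs” concrete, here is a minimal back-of-the-envelope sketch in Python. The 10-cents-per-million-tokens figure is the one cited above; the GPT-4 comparison price of roughly $30 per million input tokens is an assumption based on widely reported launch-era pricing, not a number from the episode.

    # Back-of-the-envelope comparison of per-million-token serving costs.
    yi_cost_per_million = 0.10          # USD, the inference cost cited above
    gpt4_early_cost_per_million = 30.0  # USD, assumed early GPT-4 input pricing (not from the episode)

    ratio = gpt4_early_cost_per_million / yi_cost_per_million
    print(f"Roughly {ratio:.0f}x cheaper per million tokens")  # ~300x

    # What serving one billion tokens would cost at each price point.
    tokens = 1_000_000_000
    print(f"At $0.10 per million tokens: ${tokens / 1e6 * yi_cost_per_million:,.0f}")           # $100
    print(f"At $30.00 per million tokens: ${tokens / 1e6 * gpt4_early_cost_per_million:,.0f}")  # $30,000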

Connect with us:
