
Content provided by EDGE AI FOUNDATION. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by EDGE AI FOUNDATION or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Dan Cooley of Silicon Labs - The 30 Billion Dollar Question: Can AI Truly Live on the Edge?

Duration: 23:59

Imagine a world where your smart glasses don't just identify objects but tell stories about what they see—all while running on a tiny battery without heating up. This cutting-edge vision is becoming reality as semiconductor companies tackle the monumental challenge of bringing generative AI capabilities from massive cloud data centers down to microcontroller-sized devices.
The semiconductor industry stands at a fascinating crossroads where artificial intelligence capabilities are pushing beyond traditional cloud environments into battery-powered edge devices. As our podcast guest explains, this transition faces substantial hurdles: while cloud-based models expand from millions to trillions of parameters, embedded systems must dramatically reduce their footprint from terabytes to gigabytes while still delivering meaningful AI functionality. With projections showing IoT devices consuming over 30 terawatt-hours of energy by 2030 and generating 300 zettabytes of data, the need for local processing has never been more urgent.
For developers creating wearable technology like smart eyewear, constraints become particularly challenging. Weight distribution, battery life, and computing power must all be carefully balanced while maintaining comfort and style. The hardware architecture required for these applications demands innovative approaches: shared bus fabrics that enable different execution environments, strategic power management that activates high-performance cores only when needed, and neural processing units capable of handling transformer operations for generative AI workloads. Most impressively, current implementations demonstrate YOLO object detection running at just 60 milliamps—easily within battery operation parameters.
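To put that 60 milliamp figure in perspective, here is a minimal back-of-the-envelope sketch (not from the episode) estimating how long such a workload could run on a small wearable battery. Only the 60 mA inference draw comes from the discussion; the battery capacity, sleep current, and duty cycle below are illustrative assumptions, not Silicon Labs figures.

# Back-of-the-envelope battery-life estimate for on-device inference (Python).
# Only the 60 mA inference current comes from the episode; the battery
# capacity, sleep current, and duty cycle are hypothetical assumptions.

INFERENCE_MA = 60          # YOLO object detection draw cited in the episode
BATTERY_MAH = 500          # assumed small wearable battery (hypothetical)
SLEEP_MA = 1.0             # assumed deep-sleep current (hypothetical)
ACTIVE_FRACTION = 0.10     # assume the high-performance core is awake 10% of the time

def continuous_hours(capacity_mah, current_ma):
    # Hours of always-on inference, ignoring regulator losses and other loads.
    return capacity_mah / current_ma

def duty_cycled_hours(capacity_mah, active_ma, sleep_ma, active_fraction):
    # Average current when the high-performance core wakes only as needed.
    avg_ma = active_fraction * active_ma + (1 - active_fraction) * sleep_ma
    return capacity_mah / avg_ma

print(f"Always-on inference: {continuous_hours(BATTERY_MAH, INFERENCE_MA):.1f} h")
print(f"10% duty cycle:      {duty_cycled_hours(BATTERY_MAH, INFERENCE_MA, SLEEP_MA, ACTIVE_FRACTION):.1f} h")

Under those assumptions, always-on detection runs for roughly eight hours, while waking the high-performance core only a tenth of the time stretches that to roughly three days, which is exactly why the strategic power management described above matters so much.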
The $30 billion embedded AI market represents a tremendous opportunity for innovation, but also requires robust software ecosystems that help traditional microcontroller customers without AI expertise navigate this complex landscape. As next-generation devices begin supporting generative capabilities alongside traditional CNN and RNN networks, we're witnessing the dawn of truly seamless human-machine interfaces. Ready to explore how these technologies might transform your industry? Listen now to understand the future of computing at the edge.

Send us a text

Support the show

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org


Chapters

1. Dan Cooley of Silicon Labs - The 30 Billion Dollar Question: Can AI Truly Live on the Edge? (00:00:00)

2. Promises of Generative AI at the Edge (00:00:08)

3. Growing Demand for Embedded AI (00:01:42)

4. Scale of Data and Power Challenges (00:05:11)

5. Market Growth and Use Cases (00:08:31)

6. Architectural Challenges in Small Devices (00:11:27)

7. Building an Enabling Software Ecosystem (00:16:09)

8. Next-Generation Solutions and Demonstrations (00:20:22)

