Packing Large AI Into Small Embedded Systems
Not every microcontroller can handle artificial intelligence and machine learning (AI/ML) chores. Simplifying the models is one way to squeeze algorithms into a more compact embedded compute engine. Another is to pair the host processor with an AI accelerator like Femtosense's Sparse Processing Unit (SPU) SPU-001 and take advantage of sparsity in AI/ML models.
In this episode, Sam Fok, CEO at Femtosense, talks about AI/ML on the edge, the company's dual-sparsity design, and how the small, low-power SPU-001 can augment a host processor.
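To give a feel for why sparsity helps on constrained hardware, here is a minimal C sketch of a pruned layer stored in compressed sparse row (CSR) form, where only nonzero weights are kept and multiplied. It is a generic illustration, not Femtosense's format or the SPU-001's API; the matrix sizes, values, and function names are invented for the example.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical pruned 4x8 weight matrix with 75% zeros, kept in CSR form
 * so an embedded target stores and multiplies only the nonzero weights. */

#define ROWS 4
#define COLS 8
#define NNZ  8   /* nonzero weights remaining after pruning */

static const float   values[NNZ]     = {0.5f, -1.2f, 0.8f, 0.3f, -0.7f, 1.1f, 0.4f, -0.9f};
static const uint8_t col_idx[NNZ]    = {1, 6, 0, 3, 5, 2, 4, 7};
static const uint8_t row_ptr[ROWS+1] = {0, 2, 5, 6, 8};

/* Sparse matrix-vector multiply: y = W * x, touching only nonzeros. */
static void spmv(const float *x, float *y)
{
    for (int r = 0; r < ROWS; r++) {
        float acc = 0.0f;
        for (int k = row_ptr[r]; k < row_ptr[r + 1]; k++) {
            acc += values[k] * x[col_idx[k]];   /* zero weights are skipped entirely */
        }
        y[r] = acc;
    }
}

int main(void)
{
    const float x[COLS] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[ROWS];

    spmv(x, y);

    for (int r = 0; r < ROWS; r++) {
        printf("y[%d] = %.2f\n", r, y[r]);
    }
    return 0;
}
```

In this toy case the layer needs 8 multiply-accumulates and 8 stored weights instead of 32, which is the kind of memory and compute saving that makes larger models fit alongside a small host processor.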