
Content provided by Tejas Kumar. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Tejas Kumar or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.

Shivay Lamba: How to run secure AI anywhere with WebAssembly

1:33:49
 

Links

- CodeCrafters (partner): https://tej.as/codecrafters

- WebAssembly on Kubernetes: https://www.cncf.io/blog/2024/03/12/webassembly-on-kubernetes-from-containers-to-wasm-part-01/

- Shivay on X: https://x.com/howdevelop

- Tejas on X: https://x.com/tejaskumar_


Summary


In this podcast episode, Shivay Lamba and I discuss the integration of WebAssembly with AI and machine learning, exploring its implications for developers. We dive into the benefits of running machine learning models in the browser, the significance of edge computing, and the performance advantages of WebAssembly over traditional serverless architectures. The conversation also touches on emerging hardware solutions for AI inference and the importance of accessibility in software development. Shivay shares insights on how developers can leverage these technologies to build efficient and privacy-focused applications.
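
As a rough illustration of the in-browser inference discussed in the episode (this is a sketch, not code from the conversation), the snippet below loads a TensorFlow.js graph model and runs it on the official WebAssembly backend. The model URL, input size, and preprocessing steps are placeholder assumptions.

```ts
// Minimal sketch: in-browser inference on the TensorFlow.js WASM backend.
// The model URL and 224x224 input shape are hypothetical placeholders.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' backend

async function classify(pixels: ImageData): Promise<Float32Array | Int32Array | Uint8Array> {
  await tf.setBackend('wasm'); // run ops on WebAssembly instead of WebGL/CPU
  await tf.ready();

  // Hypothetical hosted model; swap in a real model.json URL.
  const model = await tf.loadGraphModel('https://example.com/model/model.json');

  // Preprocess: image -> normalized float tensor with a batch dimension.
  const img = tf.browser.fromPixels(pixels);
  const input = tf.image
    .resizeBilinear(img, [224, 224])
    .toFloat()
    .div(255)
    .expandDims(0);

  const scores = model.predict(input) as tf.Tensor;
  const probs = await scores.data(); // raw class scores stay on-device

  img.dispose();
  input.dispose();
  scores.dispose();
  return probs;
}
```

Once the 'wasm' backend is selected, the same tf.* calls execute via WebAssembly (using SIMD and threads where the browser supports them), and no input data ever leaves the device, which is the privacy and portability argument made in the episode.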


Chapters


00:00 Shivay Lamba

03:02 Introduction and Background

06:02 WebAssembly and AI Integration

08:47 Machine Learning on the Edge

11:43 Privacy and Data Security in AI

15:00 Quantization and Model Optimization

17:52 Tools for Running AI Models in the Browser

32:13 Understanding TensorFlow.js and Its Architecture

37:58 Custom Operations and Model Compatibility

41:56 Overcoming Limitations in JavaScript ML Workloads

46:00 Demos and Practical Applications of TensorFlow.js

54:22 Server-Side AI Inference with WebAssembly

01:02:42 Building AI Inference APIs with WebAssembly

01:04:39 WebAssembly and Machine Learning Inference

01:10:56 Summarizing the Benefits of WebAssembly for Developers

01:15:43 Learning Curve for Developers in Machine Learning

01:21:10 Hardware Considerations for WebAssembly and AI

01:27:35 Comparing Inference Speeds of AI Models


Hosted on Acast. See acast.com/privacy for more information.
