Fluid Compute: Vercel’s Next Step in the Evolution of Serverless?

Duration: 32:58
 
Content provided by Modern Web. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Modern Web or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

In this episode of the Modern Web Podcast, hosts Rob Ocel and Danny Thompson sit down with Mariano Cocirio, Staff Product Manager at Vercel, to discuss Fluid Compute, a new cloud computing model that blends serverless scalability with traditional server efficiency. They explore the challenges of AI workloads in serverless environments, the high cost of idle time, and how Fluid Compute optimizes execution to reduce costs while maintaining performance. Mariano explains how this approach allows instances to handle multiple requests efficiently while still scaling to zero when not in use. The conversation also covers what developers need to consider when adopting this model, the impact on application architecture, and how to track efficiency gains using Vercel’s observability tools. Is Fluid Compute the next step in the evolution of serverless, or is it redefining cloud infrastructure altogether?

Key Points

  • Fluid Compute merges the best of servers and serverless – It combines the scalability of serverless with the efficiency and reusability of traditional servers, allowing instances to handle multiple requests while still scaling down to zero.
  • AI workloads struggle with traditional serverless models – Serverless is optimized for quick, stateless functions, but AI models often require long processing times, leading to high costs for idle time. Fluid Compute solves this by dynamically managing resources.
  • No major changes required for developers – Fluid Compute works like a standard Node or Python server, so developers don’t need to change their code significantly. The only consideration is handling shared global state, much as in a traditional server environment (see the sketch after this list).
  • Significant cost savings and efficiency improvements – Vercel’s observability tools show real-time reductions in compute costs, with some early adopters seeing up to 85% savings simply by enabling Fluid Compute.
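
To make the shared global state point concrete, here is a minimal, hypothetical sketch (the handler and cache names are illustrative, not Vercel’s actual API) of how module-level state behaves when a single warm instance serves multiple concurrent requests:

```ts
// Illustrative sketch only: a web-standard request handler, not Vercel's exact API.
// Under a concurrent model like Fluid Compute, one warm instance may serve many
// requests at the same time, so anything at module scope is shared across them.

// Shared by every request this instance handles: fine for caches or connection
// pools, risky for per-request data.
const responseCache = new Map<string, string>();

// Hypothetical per-request handler.
export async function handleRequest(req: Request): Promise<Response> {
  const key = new URL(req.url).pathname;

  // Reusing shared state across requests is the efficiency win.
  const cached = responseCache.get(key);
  if (cached) return new Response(cached, { status: 200 });

  const body = await expensiveWork(key);
  responseCache.set(key, body);
  return new Response(body, { status: 200 });
}

// Anti-pattern: per-request data in module scope can leak between
// concurrent requests sharing this instance.
// let currentUserId: string | undefined;  // avoid this

async function expensiveWork(key: string): Promise<string> {
  // Placeholder for a slow operation, e.g. an AI model call.
  return `result for ${key}`;
}
```

The same property that makes instance reuse efficient (a warm cache, a shared connection pool) is what requires care: per-request data kept in module scope can leak between concurrent requests, exactly as it would on a traditional server.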

Chapters

0:00 – Introduction and Guest Welcome

1:08 – What is Fluid Compute? Overview and Key Features

2:08 – Why Serverless Compute Struggles with AI Workloads

4:00 – Fluid Compute: Combining Scalability and Efficiency

6:04 – Cost Savings and Real-world Impact of Fluid Compute

8:12 – Developer Experience and Implementation Considerations

10:26 – Managing Global State and Concurrency in Fluid Compute

13:09 – Observability Tools for Performance and Cost Monitoring

20:01 – Long-running Instances and Post-operation Execution

24:02 – Evolution of Compute Models: From Servers to Fluid Compute

29:08 – The Future of Fluid Compute and Web Development

30:15 – How to Enable Fluid Compute on Vercel

32:04 – Closing Remarks and Guest Social Media Info

Follow Mariano Cocirio on Social Media:

Twitter: https://x.com/mcocirio

LinkedIn: https://www.linkedin.com/in/mcocirio/

Sponsored by This Dot: thisdot.co
