The Official AWS Podcast is a podcast for developers and IT professionals looking for the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Hawn Nguyen-Loughren for regular updates, deep dives, launches, and interviews. Whether you’re training machine learning models, developing open source projects, or building cloud solutions, the Official AWS Podcast has something for you.
Content provided by Sébastien Stormacq and Amazon Web Services. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Sébastien Stormacq and Amazon Web Services or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
3 ways to deploy your large language models on AWS
In this episode of the AWS Developers Podcast, we dive into the different ways to deploy large language models (LLMs) on AWS. From self-managed deployments on EC2 to fully managed services like SageMaker and Bedrock, we break down the pros and cons of each approach. Whether you're optimizing for compliance, cost, or time-to-market, we explore the trade-offs between flexibility and simplicity. You'll hear practical insights into instance selection, infrastructure management, model sizing, and prototyping strategies. We also examine how services like SageMaker JumpStart and serverless options like Bedrock can streamline your machine learning workflows. If you're building or scaling AI applications in the cloud, this episode will help you navigate your options and design a deployment strategy that fits your needs.
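Of the three approaches discussed, Bedrock is typically the fastest route to a prototype, since there is no infrastructure to manage. As a rough illustration (not from the episode), here is a minimal sketch of calling a model through the Bedrock runtime API with boto3; the model ID, region, and request shape follow the Anthropic-on-Bedrock message format and are assumptions that depend on which models are enabled in your account:

```python
import json

def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Build a JSON request body in the Anthropic Messages format
    used by Claude models on Amazon Bedrock (an assumed example)."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# The actual invocation needs AWS credentials and model access granted
# in the console; model ID and region below are placeholders:
#
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     body=build_request("Compare EC2 vs Bedrock for LLM hosting."),
# )
# print(json.loads(response["body"].read())["content"][0]["text"])
```

The same request body works whether you call `invoke_model` directly or wrap it behind an application API, which is part of what makes the serverless option attractive for early prototyping.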
With Germaine Ong, Startup Solution Architect, and Jarett Yeo, Startup Solution Architect
165 episodes