What runs ChatGPT, Sora, DeepSeek & Llama on Azure? (feat. Mark Russinovich)
Build and run your AI apps and agents at scale with Azure. Orchestrate multi-agent apps and high-scale inference solutions using open-source and proprietary models, with no infrastructure management needed. With Azure, you can connect frameworks like Semantic Kernel to models from DeepSeek, Llama, OpenAI’s GPT-4o, and Sora without provisioning GPUs or writing complex scheduling logic. Just submit your prompt and assets, and the models do the rest.
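As a rough illustration of that "just submit your prompt" flow (a minimal sketch, not code from the episode), here is what a single call to an Azure OpenAI GPT-4o deployment can look like using the openai Python SDK; the endpoint, deployment name, and API version below are placeholder assumptions:

import os
from openai import AzureOpenAI  # pip install openai

# Assumed placeholders: your Azure OpenAI endpoint and a GPT-4o deployment name.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Submit a prompt; the managed service handles GPU scheduling and scaling.
response = client.chat.completions.create(
    model="gpt-4o",  # the name of your model deployment
    messages=[{"role": "user", "content": "Write a 30-second narration for a product demo video."}],
)
print(response.choices[0].message.content)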
Using Azure’s Model as a Service, you can access cutting-edge models, including brand-new releases like DeepSeek R1 and Sora, as managed APIs with autoscaling and built-in security. Whether you’re handling bursts of demand, fine-tuning models, or provisioning compute, Azure provides the capacity, efficiency, and flexibility you need. With industry-leading AI silicon, including H100s and GB200s, plus advanced cooling, your solutions can run with the same power and scale behind ChatGPT.
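For models exposed through Model as a Service, the pattern is similar: call the managed endpoint directly and let the service autoscale. A minimal sketch, assuming a serverless DeepSeek R1 deployment and the azure-ai-inference Python package; the endpoint and key names are placeholders:

import os
from azure.ai.inference import ChatCompletionsClient  # pip install azure-ai-inference
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Assumed placeholder endpoint for a serverless (Model as a Service) deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.models.ai.azure.com",
    credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),
)

# One request against the managed API; no GPU provisioning on your side.
response = client.complete(
    messages=[UserMessage(content="Summarize why autoscaling matters for bursty inference workloads.")],
)
print(response.choices[0].message.content)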
Mark Russinovich, Azure CTO, Deputy CISO, and Microsoft Technical Fellow, joins Jeremy Chapman to share how Azure's latest AI advancements and orchestration capabilities unlock new possibilities for developers.
► QUICK LINKS:
00:00 - Build and run AI apps and agents in Azure
00:26 - Narrated video generation example with multi-agentic, multi-model app
03:17 - Model as a Service in Azure
04:02 - Scale and performance
04:55 - Enterprise-grade security
05:17 - Latest AI silicon available on Azure
06:29 - Inference at scale
07:27 - Everyday AI and agentic solutions
08:36 - Provisioned Throughput
10:55 - Fractional GPU Allocation
12:13 - What's next for Azure?
12:44 - Wrap up
► Link References
For more information, check out https://aka.ms/AzureAI
► Unfamiliar with Microsoft Mechanics?
Microsoft Mechanics is Microsoft's official video series for IT. Watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics