Content provided by Jerry Cuomo. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Jerry Cuomo or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
[TRENDS] Foundation Models

16:52
 
Manage episode 349691922 series 2876740
Now Live! This special *trends* episode is on Foundation Models. Listen in as host Jerry Cuomo is joined by fellow IBM Consulting Fellow, Blaine Dolph, for a discussion on this exciting emerging trend in artificial intelligence.

Blaine provides a definition of Foundation Models, including what makes them a breakthrough worth paying attention to in the months to come. Jerry and Blaine share several examples using the OpenAI Playground, and with a little help from Digital Jerry (DJ), demonstrate how these models can be applied to automate aspects of your everyday work life. Blaine discusses how early models, like GPT-3, BERT, and DALL-E 2, have shown what’s possible: input a short prompt, and the system generates an entire essay, or a complex image, based on your parameters, even if it wasn’t specifically trained to perform that exact task or generate that kind of image.
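As a rough illustration of the prompt-in, content-out pattern Jerry and Blaine demo in the Playground, here is a hedged sketch of what a prompt-completion request might look like as a plain payload. The model name, field names, and defaults are illustrative assumptions, not a guaranteed current API shape:

```python
import json

def build_completion_request(prompt, max_tokens=256, temperature=0.7):
    """Assemble an illustrative prompt-completion request payload.

    The field names below mirror the common Playground knobs (prompt,
    length, temperature); treat them as assumptions, not a spec.
    """
    return {
        "model": "gpt-3.5-turbo-instruct",  # assumed model id for illustration
        "prompt": prompt,
        "max_tokens": max_tokens,      # upper bound on generated length
        "temperature": temperature,    # higher = more varied output
    }

payload = build_completion_request("Write a short essay on foundation models.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is only that the interface is a short prompt plus a few sampling parameters; the model itself supplies everything else, which is what makes the Playground demos feel so open-ended.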

What exactly is a foundation model, you ask?

A foundation model is a deep learning model that has been pre-trained on extremely large data sets. In many cases, the data is scraped from the public internet, including sources such as Wikipedia and GitHub.

Unlike narrow artificial intelligence (narrow AI) models that are trained to perform a single task, foundation models are trained with a wide variety of data and can transfer knowledge from one task to another. This type of large-scale neural network can be trained once and then fine-tuned to complete different types of tasks.
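The "train once, fine-tune many times" idea above can be sketched with a deliberately tiny stand-in: one frozen "pretrained" encoder shared across tasks, with a small task-specific head fit on top for each new task. Everything here (the letter-frequency encoder, the least-squares head) is a toy illustration of the workflow, not a real foundation model:

```python
import numpy as np

def pretrained_encoder(text):
    # Stand-in for a large pretrained model: maps text to a fixed-size
    # vector (here, just normalized letter frequencies).
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / max(vec.sum(), 1)

def fit_head(examples, labels):
    # "Fine-tuning" in this sketch: the encoder stays frozen, and only a
    # small linear head is fit on top of it for the new task.
    X = np.array([pretrained_encoder(t) for t in examples])
    y = np.array(labels, dtype=float)
    head, *_ = np.linalg.lstsq(X, y, rcond=None)
    return head

def predict(head, text):
    # Reuse the same frozen encoder for every task; only the head differs.
    return float(pretrained_encoder(text) @ head)

# Two different "downstream tasks" sharing one pretrained encoder:
sentiment_head = fit_head(["great fun", "awful bore"], [1.0, 0.0])
topic_head = fit_head(["stocks and bonds", "goals and penalties"], [1.0, 0.0])
```

The design point the sketch mirrors is that the expensive step (pretraining the encoder) happens once, while each cheap task-specific head can be fit independently, which is how one foundation model supports many different tasks.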

Foundation models contain hundreds of billions of parameters and are trained on hundreds of gigabytes of data. Once trained, however, each foundation model can be adapted an unlimited number of times to automate a wide variety of discrete tasks.


74 episodes

