Ep. 228 Building Trust in Agents: How Salesforce Powers Secure AI

23:10
Content provided by The Oakmont Group and John Gilroy. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Oakmont Group and John Gilroy or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

Connect with John Gilroy on LinkedIn: https://www.linkedin.com/in/john-gilroy/

Want to listen to other episodes? www.Federaltechpodcast.com

Federal leaders are walking a tightrope. They want to leverage the promise of AI, yet they are responsible for keeping federal data secure. Beyond that, these AI “experiments” must not disrupt larger systems, and leaders need a clear-eyed, objective view of practical applications.

During today’s conversation, Paul Tatum gives his view on how to strike this balance.

He illustrates the idea of experimenting with AI through, of all things, avocados. Playing the role of someone who must document the process of importing avocados, he shows how an AI agent can be used safely while still providing practical information.

The key word is “safely.” People working on federal systems are jumping into AI agents without regard for compliance or security. They run into “unintended consequences” when they access data sloppily, which can lead to leaks of sensitive information.

Rather than dwelling on potential abuse, Paul Tatum outlines the Salesforce approach, which allows experimentation within specific guidelines and provides compliance and controls for autonomous agents.

This way, the data being accessed is cleaned and not subject to misinformation or duplication problems. Further, because you are working in the functional equivalent of a “sandbox,” you can be confident that information assembled from AI experiments ends up somewhere safe and secure.

Learn how to leverage AI, but learn in an environment where mistakes will not come back to haunt you.

243 episodes
