
Content provided by S&P Global Market Intelligence. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by S&P Global Market Intelligence or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

Ethical AI

35:15
 

In the rush to deliver data to AI projects, it’s all too easy for teams to pull the data that’s most easily accessible, without giving consideration to its nature and scope. Emily Jasper and Abby Simmons return to discuss ethical concerns about the data that feeds AI projects with host Eric Hanselman. AI implementations place a much greater burden on data quality than traditional IT projects. When data becomes the product, development practices such as minimum viable product (MVP) releases require that data be held to a much higher quality standard to address ethical concerns about its suitability. If a dataset contains bias or lacks representation for the community it serves, it will not only fall short in function, but can also reinforce the bias and errors in the data. In effect, it becomes its own data poisoning attack, one of the key security concerns in AI applications.

Ethical approaches to AI applications have to focus on ensuring that outputs reflect the diverse nature of society and move beyond a narrow, middle-of-the-road average. They have to integrate perspectives and feedback from the full spectrum of the society they claim to represent. Achieving this takes additional work, but it can pay off in the expanded market it opens up. At the same time, organizations need to put their capabilities to work serving those parts of their community that don’t have access to AI’s benefits. This can help keep marginalized segments of society from being left behind, in what is becoming the next chasm in the digital divide.


102 episodes

Next in Tech

13 subscribers


