Content provided by Turpentine, Erik Torenberg, and Nathan Labenz. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Turpentine, Erik Torenberg, and Nathan Labenz or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

OpenAI's Identity Crisis: History, Culture & Non-Profit Control with ex-employee Steven Adler

Duration: 2:03:13
 

In this episode, former OpenAI research scientist Steven Adler discusses OpenAI's evolution through its distinct phases, including its growth, internal culture shifts, and the contentious move from nonprofit to for-profit control. The conversation covers the early days of OpenAI's development of GPT-3 and GPT-4, the cultural and ethical disagreements within the organization, and the recent amicus brief in the Elon Musk v. OpenAI lawsuit. Adler also explores the broader implications of AI capabilities, safety evaluations, and the critical need for transparent and responsible AI governance. The episode offers a candid look at the internal dynamics of a leading AI company, along with perspectives on the responsibilities and challenges facing AI researchers and developers today.

Amicus brief in the Elon Musk v. OpenAI lawsuit: https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.152.0.pdf

Steven Adler's post on X about personhood credentials (a paper he co-authored): https://x.com/sjgadler/status/1824245211322568903

Steven Adler's Substack post on a "minimum testing period" for frontier AI: https://substack.com/@sjadler/p-161143327?utm_source=profile&utm_medium=reader2

Steven Adler's Substack post on TSFT model testing: https://substack.com/@sjadler/p-159883282?utm_source=profile&utm_medium=reader2

Steven Adler's Substack: https://stevenadler.substack.com/

Upcoming Major AI Events Featuring Nathan Labenz as a Keynote Speaker

https://www.imagineai.live/

https://adapta.org/adapta-summit

https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/


PRODUCED BY:

https://aipodcast.ing


CHAPTERS:

(00:00) About the Episode

(05:15) Joining OpenAI: Early Days and Cultural Insights

(06:41) The Anthropic Split and Its Impact

(11:32) Product Safety and Content Policies at OpenAI (Part 1)

(19:21) Sponsors: ElevenLabs | Oracle Cloud Infrastructure (OCI)

(21:48) Product Safety and Content Policies at OpenAI (Part 2)

(22:08) The Launch and Impact of GPT-4

(32:15) Evaluating AI Models: Challenges and Best Practices (Part 1)

(33:46) Sponsors: Shopify | NetSuite

(37:10) Evaluating AI Models: Challenges and Best Practices (Part 2)

(55:58) AGI Readiness and Personhood Credentials

(01:05:03) Biometrics and Internet Friction

(01:06:52) Credential Security and Recovery

(01:08:05) Trust and Ecosystem Diversity

(01:09:40) AI Agents and Verification Challenges

(01:14:28) OpenAI's Evolution and Ambitions

(01:22:07) Safety and Regulation in AI Development

(01:35:53) Internal Dynamics and Cultural Shifts

(01:58:18) Concluding Thoughts on AI Governance

(02:02:29) Outro


244 episodes
