E137: AI Safety vs Speed: Helen Toner Discusses OpenAI Board Experience, Regulatory Approaches, and Military AI [The Cognitive Revolution]
This week on Upstream, we’re releasing an episode of The Cognitive Revolution. Nathan Labenz interviews Helen Toner, a director at CSET, about her experience on OpenAI's board, the concept of adaptation buffers for AI integration, and AI's role in military decision-making. They discuss the implications of AI development, the need for regulatory policy, and the geopolitical dynamics of AI competition with China.
—
📰 Be notified early when Turpentine drops new publications: https://www.turpentine.co/exclusiveaccess
—
RECOMMENDED PODCASTS:
🎙️ The Cognitive Revolution
The Cognitive Revolution is a podcast about AI where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shifts it will unlock over the coming decades.
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk?si=7357ec31ac424043&nd=1&dlsi=060a53f1d7be47ad
—
SPONSORS:
☁️ Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds and offers one consistent price. Oracle is offering to cut your cloud bill in half. See if your company qualifies at https://oracle.com/turpentine
🕵️‍♂️ Take your personal data back with Incogni! Use code UPSTREAM at the link below and get 60% off an annual plan: https://incogni.com/upstream
💥 Access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist.
—
LINKS:
Helen Toner's appearance on the TED AI show: https://www.ted.com/talks/the_ted_ai_show_what_really_went_down_at_openai_and_the_future_of_regulation_w_helen_toner
Helen Toner's Substack: https://helentoner.substack.com/
Additional recommended reads:
https://helentoner.substack.com/p/nonproliferation-is-the-wrong-approach
https://cset.georgetown.edu/publication/ai-for-military-decision-making/
—
X / TWITTER:
@hlntnr
@labenz
@eriktorenberg
@turpentinemedia
—
HIGHLIGHTS FROM THE EPISODE:
- Helen Toner joined OpenAI's board in 2021, bringing AI policy expertise when AGI discussions were still uncommon.
- She confirms that rumors about QStar contributing to the board's decision to fire Sam Altman were completely false.
- Helen observes contradictions at OpenAI: safety-focused research papers alongside aggressive policy positions.
- For AI whistleblowers, she recommends clear disclosure standards rather than vague reporting guidelines.
- Helen introduced the concept of "adaptation buffers," noting that while frontier AI development gets more expensive, capabilities become cheaper to replicate once achieved.
- Rather than focusing on non-proliferation, Helen advocates using adaptation time to build societal resilience (like improving outbreak detection).
- She favors conditional slowdowns (based on risk mitigation) rather than arbitrary pauses or compute limits.
- For military AI applications, Helen's research identifies three key considerations: scope (how narrowly the system's task is bounded), data quality, and human-machine interaction design.
- Helen expresses skepticism about "AI war simulations," arguing military contexts have too many unknowns to be modeled like games.
- She suggests the shift in AI CEOs' rhetoric about China competition is "the path of least resistance" to argue against regulation.
- Helen acknowledges the difficulty of reaching a stable international equilibrium around AI development, given how many unknowns remain about what superintelligence would mean for political systems.