
Scaling Laws: The State of AI Safety with Steven Adler

49:14
Content provided by The Lawfare Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Lawfare Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.

Thanks to Leo Wu for research assistance!

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show at http://supporter.acast.com/lawfare.


Hosted on Acast. See acast.com/privacy for more information.
