The State of AI Safety with Steven Adler

Scaling Laws · 47:23
 
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.

You can read Steven’s Substack here: https://stevenadler.substack.com/

Thanks to Leo Wu for research assistance!


Hosted on Acast. See acast.com/privacy for more information.
