Content provided by Center for AI Safety. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Center for AI Safety or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

AISN #50: AI Action Plan Responses

Manage episode 474423847 series 3647399
Plus, Detecting Misbehavior in Reasoning Models.

In this newsletter, we cover AI companies’ responses to the federal government's request for information on the development of an AI Action Plan. We also discuss an OpenAI paper on detecting misbehavior in reasoning models by monitoring their chains of thought.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

On January 23, President Trump signed an executive order giving his administration 180 days to develop an “AI Action Plan” to “enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

Despite the rhetoric of the order, the Trump administration has yet to articulate many policy positions with respect to AI development and safety. In a recent interview, Ben Buchanan—Biden's AI advisor—interpreted the executive order as giving the administration time to develop its AI policies. The AI Action Plan will therefore likely [...]

---

First published:
March 31st, 2025

Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-50-ai-action

---

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.

---

Images from the article:

Chat conversation showing discussion about software testing and code implementation strategies. The green and red messages discuss analyzing polynomial functions and verifying test results.
Three graphs comparing baseline and CoT pressure training data over training epochs, showing different cheating scenarios. The left graph shows no cheating, the middle shows detected cheating, and the right shows undetected cheating attempts during agent training.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

59 episodes