Content provided by LessWrong. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
“Distillation Robustifies Unlearning” by Bruce W. Lee, Addie Foote, alexinf, leni, Jacob G-W, Harish Kamath, Bryce Woodworth, cloud, TurnTrout

17:19
 
Episode 489209929, series 3364760
Current “unlearning” methods only suppress capabilities rather than truly removing them. But if you distill an unlearned model into a randomly initialized model, the resulting network is actually robust to relearning. We show why this works, how well it works, and how to trade off compute for robustness.
Unlearn-and-Distill applies unlearning to a bad behavior and then distills the unlearned model into a new model. Distillation makes it way harder to retrain the new model to do the bad thing. Produced as part of the ML Alignment & Theory Scholars Program in the winter 2024–25 cohort of the shard theory stream.
Read our paper on arXiv and enjoy an interactive demo.
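At its core, Unlearn-and-Distill is ordinary knowledge distillation with an unlearned teacher and a randomly initialized student. Below is a toy sketch of the distillation step, not the authors' code: the "models" are hypothetical linear maps on synthetic data, and gradient descent shrinks the student's KL divergence to the teacher's output distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-ins: the "teacher" plays the role of the already-unlearned
# model; the "student" is a fresh, randomly initialized network.
W_teacher = rng.normal(size=(8, 4))
W_student = rng.normal(size=(8, 4))

X = rng.normal(size=(256, 8))       # distillation inputs
P_teacher = softmax(X @ W_teacher)  # teacher output distributions (targets)

def mean_kl(p, q):
    """Mean KL(p || q) over a batch of categorical distributions."""
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

kl_start = mean_kl(P_teacher, softmax(X @ W_student))
for _ in range(500):
    P_student = softmax(X @ W_student)
    # Gradient of the mean cross-entropy to the teacher's distributions
    grad = X.T @ (P_student - P_teacher) / len(X)
    W_student -= 0.5 * grad
kl_end = mean_kl(P_teacher, softmax(X @ W_student))

print(f"KL to teacher: {kl_start:.3f} -> {kl_end:.3f}")
```

The student only ever sees the teacher's outputs, in which the unlearned capability is (by assumption) already suppressed; per the post, this is why the distilled student ends up robust to relearning rather than merely suppressed.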
Robust unlearning probably reduces AI risk
Maybe some future AI has long-term goals and humanity is in its way. Maybe future open-weight AIs have tons of bioterror expertise. If a system has dangerous knowledge, that system becomes [...]
---
Outline:
(01:01) Robust unlearning probably reduces AI risk
(02:42) Perfect data filtering is the current unlearning gold standard
(03:24) Oracle matching does not guarantee robust unlearning
(05:05) Distillation robustifies unlearning
(07:46) Trading unlearning robustness for compute
(09:49) UNDO is better than other unlearning methods
(11:19) Where this leaves us
(11:22) Limitations
(12:12) Insights and speculation
(15:00) Future directions
(15:35) Conclusion
(16:07) Acknowledgments
(16:50) Citation
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
June 13th, 2025
Source:
https://www.lesswrong.com/posts/anX4QrNjhJqGFvrBr/distillation-robustifies-unlearning
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Unlearn-and-Distill applies unlearning to a bad behavior and then distills the unlearned model into a new model. Distillation makes it way harder to retrain the new model to do the bad thing.
Matching oracle behavior doesn’t guarantee robust unlearning. Graph (a) shows the loss during distillation for the Student (Reference) and the Student (Random). Graphs (b) and (c) show forget performance through retraining for the Language and Arithmetic settings, respectively.
