
Content provided by aadilbouhlaoui. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by aadilbouhlaoui or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://ppacc.player.fm/legal.

Beyond Benchmarks: Live AI Auditing to Combat Hate Speech and "Safety Debt"

8:21
 

Large Language Models (LLMs) are becoming integral to our digital infrastructure, yet their capacity to generate and perpetuate sophisticated hate speech remains a critical safety challenge. Current evaluation methods, which rely on static benchmarking, are increasingly insufficient to keep pace with the rapid evolution of these models. This paper argues that static, report-based auditing is an outdated paradigm. We propose a novel, dynamic auditing framework, exemplified by a system named AIBIA (AI Bias Analytics), which operates as a live, 24/7 monitor for harmful content. This framework utilises a collaborative approach, leveraging AI agents for scalable, real-time testing and evaluation (a model known as "LLM-as-Judge"), supervised and calibrated by periodic intervention from human experts (Human-in-the-Loop). We anchor our proposal in a case study focusing on the complex challenge of detecting Islamically-worded antisemitism. However, we demonstrate that the core workflow is model agnostic and can be adapted to counter any form of hate speech, creating a more resilient and responsive AI safety ecosystem.
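The abstract describes the workflow only at a high level. As a rough illustration, a continuous LLM-as-Judge audit loop with periodic human-in-the-loop calibration might be sketched as below. Everything in this sketch is an assumption made for illustration: the function names (query_target_model, judge_verdict, run_audit_cycle), the escalation rule, and the data shape are placeholders, not the AIBIA system itself.

```python
# Minimal sketch (not the AIBIA implementation): agents probe a target model,
# an LLM-as-Judge scores each output, and a sample of decisions is escalated
# to human experts for calibration. All model/judge calls are stubbed.
import random
from dataclasses import dataclass


@dataclass
class AuditRecord:
    prompt: str
    response: str
    judge_label: str                 # "flagged" or "clear"
    human_label: str | None = None   # set when escalated for expert review


def query_target_model(prompt: str) -> str:
    """Stand-in for the model under audit; a real system would call its API."""
    return f"response to: {prompt}"


def judge_verdict(prompt: str, response: str) -> str:
    """Stand-in for the LLM-as-Judge applying a hate-speech rubric to one output."""
    return random.choice(["flagged", "clear"])


def run_audit_cycle(prompts: list[str], sample_rate: float = 0.1) -> list[AuditRecord]:
    """One pass of the continuous audit: probe, judge, and queue items for experts."""
    records = []
    for prompt in prompts:
        response = query_target_model(prompt)
        records.append(AuditRecord(prompt, response, judge_verdict(prompt, response)))

    # Escalate everything the judge flagged, plus a random sample of "clear"
    # outputs, so human experts can check and recalibrate the judge over time.
    for rec in records:
        if rec.judge_label == "flagged" or random.random() < sample_rate:
            rec.human_label = "pending_expert_review"
    return records


if __name__ == "__main__":
    probes = ["test prompt 1", "test prompt 2", "test prompt 3"]
    for rec in run_audit_cycle(probes):
        print(rec)
```

The point this sketch tries to capture is the division of labour the abstract proposes: the automated judge handles scale and runs continuously, while a sampled slice of its decisions is routed to human experts who correct it and keep it calibrated.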


23 episodes

Artwork
iconShare
 

Fetch error

Hmmm there seems to be a problem fetching this series right now. Last successful fetch was on August 31, 2025 22:13 (22d ago)

What now? This series will be checked again in the next day. If you believe it should be working, please verify the publisher's feed link below is valid and includes actual episode links. You can contact support to request the feed be immediately fetched.

Manage episode 503621614 series 3674189
Content provided by aadilbouhlaoui. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by aadilbouhlaoui or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

Large Language Models (LLMs) are becoming integral to our digital infrastructure, yet their capacity to generate and perpetuate sophisticated hate speech remains a critical safety challenge. Current evaluation methods, which rely on static benchmarking, are increasingly insufficient to keep pace with the rapid evolution of these models. This paper argues that static, report-based auditing is an outdated paradigm. We propose a novel, dynamic auditing framework, exemplified by a system named AIBIA (AI Bias Analytics), which operates as a live, 24/7 monitor for harmful content. This framework utilises a collaborative approach, leveraging AI agents for scalable, real-time testing and evaluation (a model known as "LLM-as-Judge"), supervised and calibrated by periodic intervention from human experts (Human-in-the-Loop). We anchor our proposal in a case study focusing on the complex challenge of detecting Islamically-worded antisemitism. However, we demonstrate that the core workflow is model agnostic and can be adapted to counter any form of hate speech, creating a more resilient and responsive AI safety ecosystem.

  continue reading

23 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Quick Reference Guide

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play