AI, Liability, and Hallucinations in a Changing Tech and Law Environment
Since ChatGPT came on the scene, numerous incidents have surfaced involving attorneys submitting court filings riddled with AI-generated hallucinations—plausible-sounding case citations that purport to support key legal propositions but are, in fact, entirely fictitious. As sanctions against attorneys mount, it seems clear there are a few kinks in the tech. Even AI tools designed specifically for lawyers can be prone to hallucinations.
In this episode, we look at the potential and risks of AI-assisted tech in law and policy with two Stanford Law researchers at the forefront of this issue: RegLab Director Professor Daniel Ho and JD/PhD student and computer science researcher Mirac Suzgun. Together with several co-authors, they examine the emerging risks in two recent papers, “Profiling Legal Hallucinations in Large Language Models” (Journal of Legal Analysis, 2024) and the forthcoming “Hallucination-Free?” in the Journal of Empirical Legal Studies. Ho and Suzgun offer new insights into how legal AI is working, where it’s failing, and what’s at stake.
Links:
- Daniel Ho >>> Stanford Law page
- Stanford Institute for Human-Centered Artificial Intelligence (HAI) >>> Stanford University page
- Regulation, Evaluation, and Governance Lab (RegLab) >>> Stanford University page
Connect:
- Episode Transcripts >>> Stanford Legal Podcast Website
- Stanford Legal Podcast >>> LinkedIn Page
- Rich Ford >>> Twitter/X
- Pam Karlan >>> Stanford Law School Page
- Stanford Law School >>> Twitter/X
- Stanford Lawyer Magazine >>> Twitter/X
(00:00:00) Introduction to AI in Legal Education
(00:05:01) AI Tools in Legal Research and Writing
(00:12:01) Challenges of AI-Generated Content
(00:20:01) Reinforcement Learning with Human Feedback
(00:30:01) Audience Q&A