
Are reasoning models fundamentally flawed?

Duration: 16:53
 
Content provided by ITPro. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by ITPro or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

AI reasoning models have emerged in the past year as a beacon of hope for large language models (LLMs), with AI developers such as OpenAI, Google, and Anthropic selling them as the go-to solution for solving the most complex business problems.

However, a new research paper by Apple has cast significant doubt on the efficacy of reasoning models, going as far as to suggest that when a problem is too complex, they simply give up. What's going on here? And does it mean reasoning models are fundamentally flawed?

In this episode, Rory Bathgate speaks to ITPro's news and analysis editor Ross Kelly about some of the report's key findings and what they mean for the future of AI development.



