Content provided by Matt Ballantine and Chris Weston. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Matt Ballantine and Chris Weston or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

(331) Knowledge-based

44:59
 
On this episode, Matt, Lisa and serial entrepreneur Rufus Evison delve deep into the challenges and potential dangers of current Generative AI, particularly Large Language Models (LLMs).
Rufus argues that LLMs inherently lack three crucial properties: they are not correctable (corrigible), transparent, or reliable. He asserts that LLMs are “none of the three” and have become “good enough to be dangerous”.

A core issue, according to Rufus, is that LLMs have no concept of truth. They operate probabilistically, predicting the “most likely word to be put next,” which often doesn’t align with factual accuracy. He vividly describes LLMs as capable of “lying plausibly” and even fabricating references when challenged.
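The mechanism Rufus describes can be illustrated with a toy sketch: a model that simply picks the statistically most frequent continuation, with no notion of whether the result is true. The corpus and bigram model here are made-up illustrations, not how any production LLM is built.

```python
# Toy next-word prediction: choose the most frequent successor seen in
# training data. "Plausible" and "true" are unrelated concepts here.
from collections import Counter, defaultdict

corpus = "the sky is blue . the sky is vast . the grass is green .".split()

# Count which word follows each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the single most frequent successor -- plausible, not verified."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "sky" (seen twice, vs "grass" once)
```

Real LLMs replace the frequency table with a neural network over long contexts, but the objective is the same: predict the likely next token, which is exactly why output can be fluent yet false.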
Rufus contrasts this with knowledge-representation-based systems, such as the original Amazon Alexa (developed by True Knowledge, where Rufus was company secretary). These systems build a “structured knowledge version of the universe” based on facts and logical deductions from axioms, similar to mathematics. He highlights their incredible efficiency, being “six orders of magnitude more efficient” than LLMs.
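The knowledge-representation idea can be sketched in miniature (this is an illustrative toy, not True Knowledge's actual design): store facts as triples and derive new facts by applying a logical rule, so every answer traces back to stated axioms.

```python
# Facts as (subject, relation, object) triples -- the axioms.
facts = {
    ("Socrates", "is_a", "human"),
    ("human", "subclass_of", "mortal"),
}

def infer(facts):
    """Apply one deduction rule to a fixed point:
    x is_a y and y subclass_of z  =>  x is_a z."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (y2, r2, z) in list(derived):
                if r1 == "is_a" and r2 == "subclass_of" and y == y2:
                    new = (x, "is_a", z)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

# The derived fact is provable from the axioms, not merely probable.
print(("Socrates", "is_a", "mortal") in infer(facts))  # True
```

Unlike the bigram toy above, there is nothing probabilistic here: an answer is either deducible or it is not, which is the transparency and reliability Rufus argues LLMs lack.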
Looking to the future, Rufus proposes a hybrid approach where LLMs’ plausible “gut feel” outputs are then rigorously checked by a fact-checking mechanism based on knowledge representation, akin to human logical reasoning or “peer review”. This structure, he suggests, could mimic how humans often think: acting on instinct, then applying post-rationalization to verify.
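The hybrid pattern could look something like the sketch below: take a generator's plausible “gut feel” claim, then accept it only if it verifies against a structured knowledge base. The `generate()` stub and the fact set are hypothetical stand-ins, not any real API or Rufus's actual architecture.

```python
# A small trusted fact store standing in for a knowledge-representation system.
KNOWN_FACTS = {("Paris", "capital_of", "France")}

def generate(question):
    """Hypothetical stand-in for an LLM: returns a plausible-sounding claim."""
    return ("Paris", "capital_of", "France")

def answer(question):
    claim = generate(question)   # fast, instinctive guess ("gut feel")
    if claim in KNOWN_FACTS:     # slow, logical verification ("peer review")
        return claim
    return None                  # refuse rather than "lie plausibly"

print(answer("What is the capital of France?"))
```

The design choice mirrors the instinct-then-post-rationalization pattern described above: generation stays cheap and fluent, while the verifier, not the generator, decides what is asserted as fact.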
Beyond the technical, Rufus raises profound philosophical concerns. He echoes Stephen Hawking’s warning about AI’s potential for a “convergent goal” leading to the removal of people. He also expresses worry that an AI-driven utopia, where money and work become unnecessary, could strip humanity of its intrinsic drivers: mastery, autonomy, and purpose. This would leave individuals without a reason to strive or engage, potentially leading to self-induced societal decline.
Despite these grave concerns, Rufus remains actively committed to his purpose: “trying to fix AI” to ensure a safer future for his children. He backs this ambition with an extraordinary track record: between 1996 and 2016, he was involved with 30 UK companies, achieving a remarkable 95% survival rate, in stark contrast to the industry’s typical 95% failure rate for startups. His past ventures include contributing to Amazon Alexa, Global Diagnostics (now part of the NHS), and Cambridge Trishaws (the UK’s original bicycle taxis).

251 episodes
