(331) Knowledge-based
On this episode, Matt, Lisa and serial entrepreneur Rufus Evison delve deep into the challenges and potential dangers of current Generative AI, particularly Large Language Models (LLMs).
Rufus argues that LLMs inherently lack three crucial properties: they are not correctable (corrigible), transparent, or reliable. He asserts that LLMs are “none of the three” and have become “good enough to be dangerous”.
A core issue, according to Rufus, is that LLMs have no concept of truth. They operate probabilistically, predicting the “most likely word to be put next,” which often doesn’t align with factual accuracy. He vividly describes LLMs as capable of “lying plausibly” and even fabricating references when challenged.
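To make that point concrete, here is a minimal, hypothetical sketch (a toy bigram model, not any real LLM) of pure next-word prediction: the continuation is chosen because it is statistically frequent in the training data, not because it is true.

```python
# Toy illustration (not any real LLM): next-word choice is purely statistical.
# The model picks whatever continuation is most frequent in its training data,
# with no check against a source of truth.

from collections import Counter

corpus = (
    "the eiffel tower is in paris . "
    "the eiffel tower is in paris . "
    "the eiffel tower is in rome ."   # a plausible-sounding error in the data
).split()

# Count bigram continuations: word -> Counter of next words
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent continuation, true or not."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("in"))  # 'paris' -- right here only because it is frequent, not because the model knows it is true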
Rufus contrasts this with knowledge-representation-based systems, such as the original Amazon Alexa (developed by True Knowledge, where Rufus was company secretary). These systems build a “structured knowledge version of the universe” from facts and logical deductions from axioms, much as mathematics does. He highlights their efficiency, calling them “six orders of magnitude more efficient” than LLMs.
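As an illustration of the knowledge-representation idea, here is a toy sketch of explicit facts plus a single deduction rule. It is illustrative only (not True Knowledge’s or Alexa’s actual engine), but it shows how every answer traces back to stated axioms.

```python
# A minimal sketch of a knowledge-representation approach:
# explicit facts plus a deduction rule, so every derived answer is inspectable.

facts = {
    ("capital_of", "paris", "france"),
    ("located_in", "eiffel_tower", "paris"),
}

# Rule: if X is located in Y and Y is the capital of Z, then X is located in Z.
def deduce(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (r1, x, y) in list(derived):
            for (r2, y2, z) in list(derived):
                if r1 == "located_in" and r2 == "capital_of" and y == y2:
                    new_fact = ("located_in", x, z)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

all_facts = deduce(facts)
print(("located_in", "eiffel_tower", "france") in all_facts)  # True, with a traceable derivation
```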
Looking to the future, Rufus proposes a hybrid approach where LLMs’ plausible “gut feel” outputs are then rigorously checked by a fact-checking mechanism based on knowledge representation, akin to human logical reasoning or “peer review”. This structure, he suggests, could mimic how humans often think: acting on instinct, then applying post-rationalization to verify.
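A hypothetical sketch of that “gut feel, then verify” pipeline might look like the following, with `generate_draft` standing in for an LLM call and a small fact store standing in for the knowledge-representation layer; both are assumptions for illustration, not real APIs.

```python
# Hypothetical hybrid pipeline: an LLM drafts a plausible answer,
# then a knowledge-representation layer accepts or rejects it.

KNOWN_FACTS = {
    ("capital_of", "france"): "paris",
}

def generate_draft(question: str) -> tuple:
    # Stand-in for an LLM call: returns a plausible (relation, subject, answer) claim.
    return ("capital_of", "france", "lyon")   # plausible but wrong

def verify(claim: tuple) -> bool:
    relation, subject, answer = claim
    expected = KNOWN_FACTS.get((relation, subject))
    return expected is not None and expected == answer

claim = generate_draft("What is the capital of France?")
if verify(claim):
    print("Answer passes the fact check:", claim[2])
else:
    print("Rejected: draft answer contradicts the knowledge base")
```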
Beyond the technical, Rufus raises profound philosophical concerns. He echoes Stephen Hawking’s warning about AI’s potential for a “convergent goal” leading to the removal of people. He also expresses worry that an AI-driven utopia, where money and work become unnecessary, could strip humanity of its intrinsic drivers: mastery, autonomy, and purpose. This would leave individuals without a reason to strive or engage, potentially leading to self-induced societal decline.
Despite these grave concerns, Rufus remains actively committed to his purpose: “trying to fix AI” to ensure a safer future for his children. He backs this ambition with an extraordinary track record: between 1996 and 2016 he was involved with 30 UK companies, achieving a 95% survival rate, in stark contrast to the industry’s typical 95% failure rate for startups. His past ventures include contributing to Amazon Alexa, Global Diagnostics (now part of the NHS), and Cambridge Trishaws (the UK’s original bicycle taxis).