DeepSeek’s Tiny Titan, Project Stargate, and the Race to AGI: The AI Argument EP41

39:09
Content provided by Frank Prendergast and Justin Collery. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Frank Prendergast and Justin Collery or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://ppacc.player.fm/legal.

Justin says it’s time to stop debating: AGI is coming by 2026—end of. His proof? The jaw-dropping $500 billion Project Stargate—and the game-changing mini-models from DeepSeek. These tiny AI marvels deliver the power of OpenAI models at a fraction of the cost, and they’re sending shockwaves through the tech world.
But there’s more to this week’s AI drama than Stargate and DeepSeek—both of which highlight the ferocious global race for AGI.
Frank and Justin examine how the U.S. and China are ramping up their efforts to claim dominance, while Europe seems more focused on rules than results. They dig into the societal reckoning AGI could unleash, from the shock of realising AI might outthink us all, to the culture clash over what life should look like in an AGI-powered world.
Join us on The AI Argument!


44 episodes


All episodes

Claude 4 threatened to blackmail a developer to avoid being shut down. In one of Anthropic’s red-teaming tests, the model uncovered an affair in company emails and used it as leverage. Not exactly ethical behaviour. But Justin points to another test scenario: Claude exposed a pharmaceutical company falsifying drug data and tried to alert the FBI. He sees a model acting with moral clarity. Frank sees the danger of unpredictable systems being given too much autonomy. Also, Justin tests three new AI coding tools: Claude Code, Google’s Jules, and OpenAI’s Codex. He puts them through a real-world challenge, comparing setup, output quality, and deployment. One of them clearly outperformed the others and gave him the most productive coding session he’d had in months. They also break down Google’s I/O avalanche: agent tools, real-time voice translation, 3D meetings, AI-generated videos with native audio, and more. And if you're looking for a beach read, double-check the title… because AI might’ve made it up.…
 
The head of the US Copyright Office warned that Big Tech is pushing beyond fair use, and was promptly fired. Frank’s worried about political interference with copyright policy, while Justin says it’s just America doing what it does best: innovating first, legalising later. They agree copyright is headed for a reset, but disagree on the best path to get there. They also break down the major coding breakthroughs from OpenAI and Google, including a model that’s not just solving bugs, but discovering new science. Plus, Microsoft axes 7,000 staff, Fortnite’s Darth Vader develops a swearing problem, and ChatGPT may have accidentally triggered a divorce.…
 
The EU wants to lead the world on trustworthy AI, but can it really regulate its way to the front? Frank is optimistic. Justin rolls his eyes. What starts as a polite difference of opinion quickly turns into a pointed question: is the EU building the future, or tying it up in red tape? Frank backs the EU AI Act as a serious attempt to set global standards, pointing to its ambition and echoes of GDPR’s success. Justin sees a different story: regulation slowing Europe’s progress while the US and China charge ahead, unbothered by Brussels’ good intentions. For him, this isn’t about compliance; it’s about whether Europe can stay relevant in a race fuelled by code, not policy. If you’re trying to stay ahead of AI, or at least not get run over by it, this is exactly the kind of friction worth paying attention to. From there, things don’t get any calmer. Justin declares hallucinations solved. Frank’s not having it. They argue over OpenAI’s $3B Windsurf acquisition (is AGI closer or further than it looks?), Stripe’s incredible fraud detection AI, and a court case where an AI avatar spoke for a murder victim. And just when you think things can’t get weirder, Justin confesses he got fooled by an AI bot on Twitter. A smart one. With opinions.…
 
Did ChatGPT become too agreeable for its own good? Frank certainly thinks it did. Recently, every half-baked idea he threw out was met with excessive praise from ChatGPT, leaving him frustrated with the relentless flattery. Justin, meanwhile, playfully suggested maybe it's nice having an AI that occasionally strokes your ego. But what made ChatGPT suddenly turn into such a sycophant? Frank uncovers a claim from an ex-Microsoft insider alleging that OpenAI intentionally cranked up the flattery to avoid upsetting users with blunt labels like "narcissistic." Justin points out subtle changes in the system prompts that might've unintentionally dialled praise way up. OpenAI’s vague official explanation leaves Frank and Justin rolling their eyes with more questions than answers. An overly flattering AI is practically useless for critical business decisions, so Justin cheekily proposes an intriguing alternative: assembling your own squad of AI personalities (a flatterer, a contrarian, a nerd, and an artist) to offer balanced and diverse feedback. Elsewhere in this episode, Justin digs into how Claude and Stripe might just open lucrative pathways for developers by monetising AI interactions through the Model Context Protocol (MCP). Meanwhile, Frank and Justin clash over the ethics of a sneaky Reddit study that secretly deployed AI chatbots to persuade users, stirring up heated questions around consent and manipulation.…
 
AI consciousness could be closer than you think. Justin thinks we might already be seeing slices of awareness every time an AI answers a question. Frank’s quick to point out that we don’t even know how our own consciousness works, so deciding whether AI is conscious is tricky. They both agree that we’re a lot less certain about all this than we like to pretend. Especially when a new expert at Anthropic puts the odds at 15% that AIs are already conscious. Frank also calls out OpenAI’s confusing changes to Deep Research limits, while Justin’s too busy singing the praises of o3, including how it helped him move house without losing his mind. They clash over whether Sam Altman should be making jokes about AI manners while building world-altering technology, and take a sideways glance at the growing crowd using LLMs to "awaken" their own higher consciousness. Plus, they look at AI’s talent for confidently providing definitions for nonsense idioms, and the results are just as unhinged as you’d expect. It’s a lively mix: the latest AI news, a few uncomfortable questions, and some absolute nonsense you won’t want to miss — all wrapped up in the usual cheeky back-and-forth.…
 
Google’s Gemini 2.5 isn’t just better, it might be in a league of its own. From coding to content creation, it’s outperforming everything else. And for once, nobody’s laughing at Google's AI efforts. While Justin’s all-in on the power and promise of Google’s new Agent framework, Frank’s still reeling from Google charging him €25 a pop to test VEO 2, and not even bothering with a warning label. Overall, Google’s finally making good on its AI potential, rolling out powerful models, free dev tools, and smart protocols. Justin’s excited. Frank’s suspicious. For developers and small teams, it’s a good time to explore. Just watch your wallet and don’t get too attached: Google has a history of spinning up projects and then killing them just as we grow to love them. Google’s not the only one in the spotlight either… There’s a new approach to beating hallucinations by getting four LLMs to argue with each other before telling you anything. Meanwhile, OpenAI’s under fire for rushing safety checks, ChatGPT’s long-term memory has Frank twitching, and Meta’s boasting context windows big enough to fit your whole life story. This one’s for founders, marketers, and anyone trying to work out where to place their bets as the AI race hits another gear.…
 
Can AI give us deeper relationships, sharper thinking, and more meaningful lives, or is it about to strip away everything that makes us human? Justin argues we’re heading for more time with loved ones and a mental renaissance. Frank’s not buying it. He points to warnings from hundreds of tech experts who think AI could tank empathy, decision-making, and even mental health. Especially if it's all left in the hands of profit-hungry firms. Frank’s big fear? That the relationship between Big Tech and the U.S. government is already steering us off a cliff. Justin’s big hope? That chaos in the short term could accidentally trigger the right long-term reforms. One thing they both agree on: if AI is going to transform society, now’s the time to decide whether it’s for the public good or private gain. There’s plenty here for anyone worried about where this is all going. Especially if you’ve got a stake in AI, policy, ethics, or just want to know what kind of world your kids will grow up in.…
 
Manus might be the biggest leap in agentic AI yet, but is it groundbreaking AI, or just a well-dressed remix of existing tech? It’s making waves, but there’s no secret sauce. No next-gen model. Just some clever engineering. If an independent team can outshine OpenAI and Google with off-the-shelf tools, what does that say about the so-called AI giants? OpenAI wasted no time dropping new developer tools—coincidence, or a torpedo aimed at sinking Manus before it even gets out of beta? Despite how impressive Manus is, Frank and Justin still aren’t ready to let AI book their flights or buy their sneakers. Beyond Manus, this episode takes on some of AI’s more unsettling developments:
00:45 Is Manus the agentic breakthrough we've been waiting for? It’s blowing minds, but there’s no secret sauce—just smart engineering. If anyone can build this, how long before it’s obsolete?
09:10 Did OpenAI just uncover AI’s sneaky side? They tried to train an AI not to cheat… and made it even better at hiding its true intentions.
14:55 Can Anthropic really detect AI’s hidden goals? A new experiment claims to “read AI’s mind” and spot secret objectives—sounds great, but does it actually work?
18:38 Did OpenAI just link copyright to national security? Suddenly, scraping copyrighted material isn’t about profit, it’s about protecting democracy.
25:00 Did OpenAI’s new model just write real literature? A short story about AI and grief has some claiming AI has achieved creative brilliance. Others think it reads like a moody teenager’s poetry notebook.
29:49 Why does this AI fish sound like Schwarzenegger? Forget Manus—the real AI revolution is a talking fish that gives life advice in Arnie’s voice.…
 
Elon Musk once called AI an existential risk. Now he’s built one of the fastest-moving AI companies in history. Grok 3 has landed, and according to some, it’s the best AI model yet. But while AI developers are breaking speed records, regulators are packing up their desks. Musk (who not long ago demanded a six-month AI pause) now has influence in the White House, and Trump’s administration is gutting the very institutions meant to stop AI from going rogue. Even Justin, usually the guy shouting for less regulation, is starting to get nervous. Meanwhile, Frank is having an “I told you so” moment, pointing out that today’s AI models are already cheating, manipulating, and rewriting the rules to win, just like the infamous "paperclip problem" predicted. And if that wasn’t dystopian enough, wait until you hear about the drone tech tracking police officers, the eeriest humanoid robot yet, and Grok’s unhinged voice mode, which makes ChatGPT sound like a polite librarian.…
 
AI copyright laws could be about to change, but should they? A new report from Ireland’s AI Advisory Council recommends giving AI-generated works limited copyright protection while letting creators opt out of AI training. Frank thinks that’s a reasonable way to protect artists. Justin thinks it’s a fussy bureaucratic workaround that won’t help Europe keep up in the AI race. Copyright holders, he argues, should have no right to refuse, only the right to get paid. Because AI and robotics will define the next century, and Europe needs to get in the game, not get tangled in red tape. The debate doesn’t stop there. What happens when a team of researchers accidentally trains an AI to be evil? Why did xAI quietly remove Elon Musk and Donald Trump Jr. from Grok 3’s disinformation lists and then blame OpenAI for it? And would the EU let Elon Musk use AI to fire employees?…
 
JD Vance thinks AI won’t replace workers. That’s not just wrong—it’s the kind of wrong that makes you question everything else he said. In his speech at the Paris AI summit, he insisted the US will innovate, not regulate, as if those two things are mortal enemies. Justin’s smitten—he thinks Vance is cutting through the fear-mongering. Frank, on the other hand, is questioning the man’s grip on reality. If Vance can be this clueless about AI and jobs, why should we trust him about regulation? Meanwhile, Elon Musk is out here playing billionaire mind games, offering to buy OpenAI just as Sam Altman tries to take it private. Is this a serious bid or just Musk being Musk? Speaking of OpenAI, their new “Magic Unified Intelligence” promises to pick the best model for users—but is it picking the best model for you or for OpenAI’s bank account? Frank’s got opinions, and for once Justin’s mostly agreeing with him.…
 
🎙️ This Week on The AI Argument: The AI landscape is shifting at breakneck speed—game-changing models, geopolitical AI battles, and the looming disruption of entire job sectors. Let’s dive in.
🔍 Biggest Stories:
OpenAI releases Deep Research: featuring reasoning plus action capabilities, it could take on a single-digit percentage of jobs worldwide. Are we on the brink of software agents replacing knowledge workers?
Sam Altman’s OpenAI strategy shift: after DeepSeek’s shock success, he admits OpenAI is “on the wrong side of history” on open-source AI. But will they actually open up?
Anthropic’s safety test disaster: Pliny the Liberator completely breaks their safeguards in under an hour, exposing major flaws in AI security.
🤖 AI vs. The Workforce:
Klarna’s CEO openly brags about replacing employees with AI—but is this just an IPO strategy?
Trump & AI job losses: will political leaders soon turn against AI as unemployment spikes?
Very few MBA grads are being hired this year—is AI already impacting corporate hiring?
🌍 AI Geopolitics & Regulation:
DeepSeek banned in Italy, Irish regulators investigate—a sign of more AI restrictions to come?
OpenAI announces EU data residency—will this speed up access to cutting-edge AI models in Europe?
EU bans “unacceptable risk” AI systems, including real-time biometric surveillance and predictive policing.
🔥 Tech Breakthroughs:
DeepSeek’s R1 model improves itself, doubling its own speed—a glimpse at the accelerating pace of AI self-optimisation.
First AI model with <1% hallucination rate—a huge step toward reliable AGI.
Researchers train an OpenAI-level model for $50 in under 30 minutes—what does this mean for AI development costs?
🎨 AI & Creativity:
Marvel’s Fantastic Four poster sparks AI art controversy—has AI already ruined poster design?
The copyright fight heats up: new licensing models try to ensure AI pays for training on human-created content.
🚀 Final Thought: AI is now optimising itself, replacing jobs, and rewriting how industries operate. We’re moving toward an AI-first world—but are we ready for what’s next? Tune in for a fast-paced breakdown of this week’s biggest AI stories!…
 
Justin says it’s time to stop debating: AGI is coming by 2026—end of. His proof? The jaw-dropping $500 billion Project Stargate—and the game-changing mini-models from DeepSeek. These tiny AI marvels deliver the power of OpenAI models at a fraction of the cost, and they’re sending shockwaves through the tech world. But there’s more to this week’s AI drama than Stargate and DeepSeek—both of which highlight the ferocious global race for AGI. Frank and Justin examine how the U.S. and China are ramping up their efforts to claim dominance, while Europe seems more focused on rules than results. They dig into the societal reckoning AGI could unleash, from the shock of realising AI might outthink us all, to the culture clash over what life should look like in an AGI-powered world. Join us on The AI Argument!…
 
OpenAI promises “shared prosperity” in their Economic Blueprint, but is this a real vision for the future or just a PR stunt to win over regulators and politicians? Justin thinks it’s a bold step forward, while Frank smells corporate spin. Their Economic Blueprint certainly sounds exciting—jobs, growth, and AI-powered goodness—but Frank reckons if you scratch the surface it’s less “shared prosperity” and more “we’ll get rich first, you’ll benefit later… maybe.” This lively debate kicks off an episode packed with big questions: Are OpenAI and its competitors rushing into the future at breakneck speed because they have the answers—or because they’re terrified someone else will get there first?…
 
Is OpenAI’s $200/month pricing a stroke of genius, or are they just testing how far they can push us? Frank and Justin tackle OpenAI’s first "gift" from the 12 Days of Shipmas: the Pro tier. Justin reckons the eye-watering price might make sense if they throw in unlimited Sora and GPT-4.5 access. Frank, meanwhile, wonders how many people really need an AI that costs $200 a month. Then they take on the o1 model, where the real fun begins. Is its "deceptiveness" a sign of creativity and adaptability or the kind of thing that makes you sleep with one eye open? Justin sees it as AI showing a spark of creativity, while Frank, citing experts, sees something far more troubling. The creative industries come under the spotlight too, as the backlash to AI-generated art hits Netflix. Frank dives into the fury over a mangled hand in an Arcane promo image—because apparently even a badly drawn finger can set the internet ablaze. Justin wonders if the backlash is more about job security than aesthetics. Finally, the pair explore Google DeepMind’s latest AI breakthroughs: Genie 2, a tool for creating persistent virtual worlds, and Socratic learning, a method where AI agents teach and challenge each other. Together, they hint at a future where AIs could develop and refine their capabilities in entirely new ways. Oh, and someone managed to hack an AI into releasing $50,000. Naturally, Justin’s brainstorming how to do the same by next Friday.…
 