AI Blindspot

Yogendra Miraje

Monthly
 
AI Blindspot is a podcast that explores the uncharted territories of AI by focusing on its cutting-edge research and frontiers. This podcast is for researchers, developers, curious minds, and anyone fascinated by the quest to close the gap between human intelligence and machines. As AI advances at breakneck speed, it has become increasingly difficult to keep up with the progress. This is a human-in-the-loop, AI-hosted podcast.
 
 
This episode is a recap of Day 2 of the AIE World's Fair, focusing on keynotes and SWE agents. Thinking in Gemini allows models to iteratively "think" for smarter, more dynamic responses with variable compute. Gemini 2.5 Pro Deep Think is a new high-budget thinking mode for extremely challenging problems, using deeper thought chains. Evals are critical for…
 
This episode covers the AI Engineer World's Fair 2025, the largest and most impactful edition yet. With over 3,000 attendees and 250+ speakers from around the globe, the event brought together leading voices in AI to explore the future of agentic workflows, model development, and human-AI collaboration. https://www.ai.engineer/ https://www.youtube.…
 
Agentic workflows are processes where AI agents dynamically plan, execute, and reflect on steps to achieve a goal, differentiating them from static, predefined workflows. Augmented LLMs, which serve as a base building block, are enhanced with capabilities like tool use and memory, enabling the creation of these more complex agents. This episode also…
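
As a rough illustration of the idea above, here is a minimal Python sketch of an augmented-LLM agent loop that plans, calls a tool, and records the result in a simple memory. The llm() helper, the TOOLS registry, and the FINISH convention are placeholders invented for this sketch, not anything from the episode.

def llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion call here.
    return "FINISH: stub answer"

TOOLS = {
    "search": lambda q: f"(stub search results for {q!r})",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = []  # simple episodic memory of past steps
    for _ in range(max_steps):
        plan = llm(f"Goal: {goal}\nMemory: {memory}\nNext step, or FINISH:<answer>")
        if plan.startswith("FINISH:"):
            return plan.removeprefix("FINISH:").strip()
        tool, _, arg = plan.partition(" ")
        observation = TOOLS.get(tool, lambda a: "unknown tool")(arg)
        memory.append({"step": plan, "observation": observation})  # reflect and record
    return "gave up after max_steps"

print(run_agent("Summarize today's AI news"))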
 
In this episode, we discuss strategies for building effective AI agents, emphasizing simplicity and composable patterns over complex frameworks. The episode distinguishes between workflows, which use predefined code paths, and agents, where LLMs dynamically direct their own processes, noting that simpler solutions are often sufficient. To build effective AI…
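
A small sketch of that distinction, again with a stubbed llm() call: the workflow fixes its steps in code, while the agent lets the model choose the next action each turn. Function names and prompts here are illustrative only.

def llm(prompt: str) -> str:
    return "stub output"  # placeholder for a real model call

def workflow(document: str) -> str:
    # Workflow: the steps and their order are fixed by the program.
    outline = llm(f"Outline this document:\n{document}")
    draft = llm(f"Write a summary from this outline:\n{outline}")
    return llm(f"Polish this summary:\n{draft}")

def agent(task: str, max_turns: int = 3) -> str:
    # Agent: the model chooses its own next action on every turn.
    state = task
    for _ in range(max_turns):
        action = llm(f"Task state: {state}\nChoose the next action, or reply DONE")
        if action.strip() == "DONE":
            break
        state = llm(f"Apply action {action!r} to the state:\n{state}")
    return state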
 
DeepSeek-V3 is an open-weights large language model. Its key features include a remarkably low development cost, achieved through innovative techniques like inference-time computing and an auxiliary-loss-free load-balancing strategy. The model's architecture utilizes Mixture-of-Experts (MoE) and Multi-head Latent Attention (MLA) for eff…
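
For intuition only, here is a generic top-k Mixture-of-Experts routing step in NumPy. It is not DeepSeek-V3's actual router (which adds auxiliary-loss-free load balancing, among other things); the sizes and the softmax gating are assumptions of this sketch.

import numpy as np

def moe_layer(x, experts, gate_w, top_k=2):
    # x: (d,) token vector; experts: list of (d, d) weight matrices;
    # gate_w: (d, n_experts) router weights.
    logits = x @ gate_w                                         # router score per expert
    top = np.argsort(logits)[-top_k:]                           # indices of the top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()   # renormalized gate weights
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
print(moe_layer(x, experts, gate_w).shape)   # (8,): one token routed to 2 of 4 experts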
 
In today's episode, we discuss two research papers describing two distinct approaches to building multi-agent collaboration: MetaGPT is a meta-programming framework using SOPs and defined roles for software development. https://arxiv.org/pdf/2308.00352 AutoGen uses customizable, conversable agents interacting via natural language or cod…
 
This episode discusses the agentic design pattern Tool Use. Tool use is essential for enhancing the capabilities of LLMs and allowing them to interact effectively with the real world. We discuss the following papers: Gorilla: Large Language Model Connected with Massive APIs https://arxiv.org/pdf/2305.15334 and MM-REACT: Prompting ChatGPT for Multimodal Reasoni…
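
A toy sketch of the tool-use pattern itself: the model is asked to emit a structured call, and the host program parses it and runs the matching function. The JSON format, the get_weather tool, and the stubbed llm() are hypothetical and are not the interfaces from the papers above.

import json

def get_weather(city: str) -> str:
    return f"(stub) 22C and clear in {city}"

TOOLS = {"get_weather": get_weather}

def llm(prompt: str) -> str:
    # Placeholder for a real model; here it always "decides" to call the weather tool.
    return json.dumps({"tool": "get_weather", "args": {"city": "Pune"}})

def answer(question: str) -> str:
    call = json.loads(llm(question + "\nRespond with a JSON tool call."))
    return TOOLS[call["tool"]](**call["args"])   # dispatch to the chosen tool

print(answer("What's the weather in Pune?"))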
 
This episode discusses the AI agentic design pattern "Reflection". SELF-REFINE is an approach where the LLM generates an initial output, then iteratively reviews and refines it, providing feedback on its own work until the output reaches a desired quality. This self-loop allows the LLM to act as both the creator and critic, enhancing its ou…
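
A minimal sketch of that loop, with a stubbed llm() helper standing in for a real model and an "LGTM" stop signal assumed purely for illustration.

def llm(prompt: str) -> str:
    return "LGTM"  # placeholder for a real model call

def self_refine(task: str, max_rounds: int = 3) -> str:
    # Draft, self-critique, revise, repeat until the critique is satisfied.
    output = llm(f"Task: {task}\nProduce an initial answer.")
    for _ in range(max_rounds):
        feedback = llm(f"Task: {task}\nAnswer: {output}\nCritique it, or say LGTM.")
        if "LGTM" in feedback:
            break
        output = llm(f"Task: {task}\nAnswer: {output}\nFeedback: {feedback}\nRevise.")
    return output

print(self_refine("Explain the Reflection pattern in one sentence."))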
 
In this episode, we discuss the following agent architectures. ReAct (Reason + Act): a method that alternates reasoning and actions, creating a powerful feedback loop for decision-making. Plan and Execute: breaks down tasks into smaller steps before executing them sequentially, improving reasoning accuracy and efficiency. However, it may face higher late…
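
For concreteness, a compact sketch of the ReAct loop: the model alternates a Thought with an Action, the environment returns an Observation, and the trace grows until a finish action appears. The llm() stub, the single lookup tool, and the finish[...] convention are assumptions of this sketch, not the paper's exact prompt format.

def llm(prompt: str) -> str:
    return "Thought: done\nAction: finish[42]"  # placeholder model call

def lookup(query: str) -> str:
    return f"(stub observation for {query!r})"

def react(question: str, max_steps: int = 4) -> str:
    trace = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(trace + "\nNext Thought and Action:")
        trace += "\n" + step
        action = step.split("Action:")[-1].strip()
        if action.startswith("finish["):
            return action[len("finish["):-1]          # final answer
        trace += "\nObservation: " + lookup(action)   # act, then observe
    return "no answer within budget"

print(react("What is 6 * 7?"))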
 
🤖 AI Agents Uncovered! 🤖 In our latest episode, we're diving deep into the fascinating world of AI agents, focusing specifically on agents powered by Large Language Models (LLMs). These agents are shaping how AI systems can perceive, decide, and act, bringing us closer to the vision of highly adaptable, intelligent assistants. Key highlights: AI agent…
 
Dario Amodei's essay, "Machines of Loving Grace," envisions the upside of AI if everything goes right. Could we be on the verge of an AI utopia where technology radically improves the world? Let's find out! Why discuss AI utopia? While many discussions around AI focus on risks, it's equally important to highlight its positive potential. The goal i…
 
💡 Nobel Prizes: AI hype or a glimpse into the Singularity? 💡 One of the biggest moments from this year's Nobel announcements was AI's double win! Nobel in Physics: Geoffrey Hinton and John Hopfield were awarded for their pioneering work on neural networks, integrating physics principles like energy-based models and statistical physics into machine learning. Nobe…
 
This episode covers OpenAI Dev Day updates and a 280-page research paper evaluating the o1 model. Realtime API: build fast speech-to-speech experiences in applications. Vision fine-tuning: fine-tune GPT-4 with images and text to enhance vision capabilities. Prompt caching: receive automatic discounts on inputs recently seen by the model. Distillation:…
 
Large language models (LLMs) excel at various tasks due to their vast training datasets, but their knowledge can be static and lack domain-specific nuance. Researchers have explored methods like fine-tuning and retrieval-augmented generation (RAG) to address these limitations. Fine-tuning involves adjusting a pre-trained model on a narrower dataset…
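
To make the contrast concrete, a toy RAG sketch: documents are retrieved at query time and placed in the prompt, whereas fine-tuning would instead update the model's weights offline and answer with no retrieval step. The keyword retriever, corpus, and llm() stub below are illustrative assumptions.

def llm(prompt: str) -> str:
    return "(stub answer grounded in the prompt)"

CORPUS = [
    "Data Commons aggregates public statistical datasets.",
    "RAG prepends retrieved passages to the model prompt.",
]

def retrieve(query: str, k: int = 1):
    # Toy retriever: rank documents by shared words with the query.
    words = set(query.lower().split())
    return sorted(CORPUS, key=lambda doc: len(words & set(doc.lower().split())), reverse=True)[:k]

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context.")

print(rag_answer("What does RAG prepend to the prompt?"))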
 
This episode explores how Google researchers are tackling the issue of "hallucinations" in Large Language Models (LLMs) by connecting them to Data Commons, a vast repository of publicly available statistical data. https://datacommons.org/ The researchers experiment with two techniques: Retrieval Interleaved Generation (RIG), where the LLM is trained …
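
As a rough, speculative sketch of the RIG idea: the model emits an inline query marker instead of guessing a statistic, and a post-processing pass swaps in a value fetched from a trusted source. The [DC: ...] marker syntax and data_commons_lookup() are inventions of this sketch, not the paper's actual interface.

import re

def data_commons_lookup(query: str) -> str:
    return "(stub value from Data Commons)"  # placeholder for a real data query

def llm(prompt: str) -> str:
    # Placeholder: a RIG-trained model would emit markers where statistics belong.
    return "The population of India is [DC: population of India]."

def rig_generate(prompt: str) -> str:
    draft = llm(prompt)
    # Replace each inline marker with the retrieved value.
    return re.sub(r"\[DC:\s*(.*?)\]", lambda m: data_commons_lookup(m.group(1)), draft)

print(rig_generate("How many people live in India?"))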
 