Player FM - Internet Radio Done Right
Content provided by DataStax and Charna Parkey. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by DataStax and Charna Parkey or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

How We Should Think About Data Reliability for Our LLMs with Mona Rakibe

38:17

Manage episode 443179563 series 3604986

This episode features an interview with Mona Rakibe, CEO and Co-founder of Telmai, an AI-based data observability platform built for open architecture. Mona is a veteran of the data infrastructure space who has held engineering and product leadership roles driving product innovation and growth strategy at startups and enterprises alike, including Reltio, EMC, Oracle, and BEA, where AI-driven solutions played a pivotal role.

In this episode, Sam sits down with Mona to discuss applications of LLMs, how to clean up data pipelines, and how we should think about data reliability.

-------------------

“When this push of large language model generative AI came in, the discussions shifted a little bit. People are more keen on, ‘How do I control the noise level in my data, in-stream, so that my model training is proper or is not very expensive, we have better precision?’ We had to shift a little bit that, ‘Can we separate this data in-stream for our users?’ Like good data, suspicious data, so they train it on little bit pre-processed data and they can optimize their costs. There's a lot that has changed from even people, their education level, but use cases also just within the last three years. Can we, as a tool, let users have some control and what they define as quality data reliability, and then monitor on those metrics was some of the things that we have done. That's how we think of data reliability. Full pipeline from ingestion to consumption, ability to have some human’s input in the system.” – Mona Rakibe

-------------------

Episode Timestamps:

(01:04): The journey of Telmai

(05:30): How we should think about data reliability, quality, and observability

(13:37): What open source data means to Mona

(15:34): How Mona guides people on cleaning up their data pipelines

(26:08): LLMs in real life

(30:37): A question Mona wishes to be asked

(33:22): Mona’s advice for the audience

(36:02): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Mona

Learn more about Telmai

97 episodes



 
