Content provided by Larry Swanson. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Larry Swanson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
The world often feels rigged, and this episode is a wake-up call to recognize the barriers that exist for those who don’t fit the traditional mold. In this episode, a kind of tribute to my dear departed Dad, I recount some powerful lessons from the man who was a brilliant psychiatrist and my biggest champion. He taught me that if something feels off about the environment you’re in, it probably is—and it’s absolutely hella-not your fault. We dig into the uncomfortable truth that many workplaces are designed for a very specific demographic, leaving neurodivergent individuals, particularly those on the autism spectrum, feeling excluded. I share three stories in which my Dad imparted to me more than my fair share of his wisdom, and I'm hoping you too can feel empowered. You'll learn that we can advocate for ourselves and others to create a more inclusive work culture.

Newsletter
Paste this into your browser if the newsletter link is broken - https://www.lbeehealth.com/

Join our Patreon - https://differentnotbrokenpodcast.com/patreon

Mentioned in this episode:
Sign Up For Our Newsletter
Stay updated on all the things! Get added to our newsletter mailing list. Newsletter…
Rebecca Schneider

Skills that Rebecca Schneider learned in library science school - taxonomy, ontology, and semantic modeling - have only become more valuable with the arrival of AI technologies like LLMs and the growing interest in knowledge graphs. Two things have stayed constant across her library and enterprise content strategy work: organizational rigor and the need to always focus on people and their needs.

We talked about:
- her work as Co-Founder and Executive Director at AvenueCX, an enterprise content strategy consultancy
- her background as a "recovering librarian" and her focus on taxonomies, metadata, and structured content
- the importance of structured content in LLMs and other AI applications
- how she balances the capabilities of AI architectures and the needs of the humans who contribute to them
- the need to disambiguate the terms that describe the span of the semantic spectrum
- the crucial role of organization in her work and how you don't have to have formally studied library science to do it
- the role of a service mentality in knowledge graph work
- how she measures the efficiency and other benefits of well-organized information
- how domain modeling and content modeling work together in her work
- her tech-agnostic approach to consulting
- the role of metadata strategy in her work
- how new AI tools permit easier content tagging and better governance
- the importance of "knowing your collection" - not becoming a true subject matter expert but at least getting familiar with the content you are working with
- the need to clean up your content and data to build successful AI applications

Rebecca's bio
Rebecca is co-founder of AvenueCX, an enterprise content strategy consultancy. Her areas of expertise include content strategy, taxonomy development, and structured content. She has guided content strategy in a variety of industries: automotive, semiconductors, telecommunications, retail, and financial services.
Connect with Rebecca online
LinkedIn
email: rschneider at avenuecx dot com

Video
Here’s the video version of our conversation: https://youtu.be/ex8Z7aXmR0o

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 25. If you've ever visited the reference desk at your local library, you've seen the service mentality that librarians bring to their work. Rebecca Schneider brings that same sensibility to her content and knowledge graph consulting. Like all digital practitioners, her projects now include a lot more AI, but her work remains grounded in the fundamentals she learned studying library science: organizational rigor and a focus on people and their needs.

Interview transcript
Larry: Hi, everyone. Welcome to episode number 25 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show Rebecca Schneider. Rebecca is the co-founder and the executive director at AvenueCX, a consultancy in the Boston area. Welcome, Rebecca. Tell the folks a little bit more about what you're up to these days.

Rebecca: Hi, Larry. Thanks for having me on your show. Hello, everyone. My name is Rebecca Schneider. I am a recovering librarian. I was a trained librarian, worked in a library with actual books, but for most of my career, I have been focusing on enterprise content strategy. Furthermore, I typically focus on taxonomies, metadata, structured content, and all of that wonderful world that we live in.

Larry: Yeah, and we both come out of that content background and have sort of converged on the knowledge graph background together kind of over the same time period. And it's really interesting, like those skills that you mentioned, the library science skills of taxonomy, metadata, structured, and then the application of that in structured content in the content world, how, as you've got in more and more into knowledge graph stuff, how has that background, I guess...
Charles Ivie

Since the semantic web was introduced almost 25 years ago, many have dismissed it as a failure. Charles Ivie shows that the RDF standard and the knowledge-representation technology built on it have actually been quite successful. More than half of the world's web pages now share semantic annotations, and the widespread adoption of knowledge graphs in enterprises and media companies is only growing as enterprise AI architectures mature.

We talked about:
- his long work history in the knowledge graph world
- his observation that the semantic web is "the most catastrophically successful thing which people have called a failure"
- some of the measures of the success of the semantic web: ubiquitous RDF annotations in web pages, numerous knowledge graph deployments in big enterprises and media companies, etc.
- the long history of knowledge representation
- the role of RDF as a Rosetta Stone between human knowledge and computing capabilities
- how the abstraction that RDF permits helps connect different views of knowledge within a domain
- the need to scope any ontology in a specific domain
- the role of upper ontologies
- his transition from computer science and software engineering to semantic web technologies
- the fundamental role of knowledge representation tech - to help humans communicate information, to innovate, and to solve problems
- how semantic modeling's focus on humans working things out leads to better solutions than tech-driven approaches
- his desire to start a conversation around the fundamental upper principles of ontology design and semantic modeling, and his hypothesis that it might look something like a network of taxonomies

Charles' bio
Charles Ivie is a Senior Graph Architect with the Amazon Neptune team at Amazon Web Services (AWS). With over 15 years of experience in the knowledge graph community, he has been instrumental in designing, leading, and implementing graph solutions across various industries.
Connect with Charles online
LinkedIn

Video
Here’s the video version of our conversation: https://youtu.be/1ANaFs-4hE4

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 31. Since the concept of the semantic web was introduced almost 25 years ago, many have dismissed it as a failure. Charles Ivie points out that it's actually been a rousing success. From the ubiquitous presence of RDF annotations in web pages to the mass adoption of knowledge graphs in enterprises and media companies, the semantic web has been here all along and only continues to grow as more companies discover the benefits of knowledge-representation technology.

Interview transcript
Larry: Hi everyone. Welcome to episode number 31 of the Knowledge Graph Insights Podcast. I am really happy today to welcome to the show Charles Ivie. Charles is currently a senior graph architect on Amazon's Neptune team. He's been in the graph community for years, worked at the BBC, ran his own consultancies, and worked at places like The Telegraph and The Financial Times - places you've heard of. So welcome, Charles. Tell the folks a little bit more about what you're up to these days.

Charles: Sure. Thanks. Thanks, Larry. Very grateful to be invited on, so thank you for that. And what have I been up to? Yeah, I've been about in the graph industry for about 14 years or something like that now. And these days I am working with the Amazon Neptune team doing everything I can to help people become more successful with their graph implementations and with their projects. And I like to talk at conferences and join things like this and write as much as I can. And occasionally they let me loose on some code too. So that's kind of what I'm up to these days.

Larry: Nice. Because you have a background as a software engineer and we will talk more about that later because I think that's really relevant to a lot of what we'll talk about.…
Andrea Gioia

In recent years, data products have emerged as a solution to the enterprise problem of siloed data and knowledge. Andrea Gioia helps his clients build composable, reusable data products so they can capitalize on the value in their data assets. Built around collaboratively developed ontologies, these data products evolve into something that might also be called a knowledge product.

We talked about:
- his work as CTO at Quantyca, a data and metadata management consultancy
- his description of data products and their lifecycle
- how the lack of reusability in most data products inspired his current approach to modular, composable data products - and brought him into the world of ontology
- how focusing on specific data assets facilitates the creation of reusable data products
- his take on the role of data as a valuable enterprise asset
- how he accounts for technical metadata and conceptual metadata in his modeling work
- his preference for a federated model in the development of enterprise ontologies
- the evolution of his data architecture thinking from a central-governance model to a federated model
- the importance of including the right variety of business stakeholders in the design of the ontology for a knowledge product
- his observation that semantic modeling is mostly about people, and working with them to come to agreements about how they each see their domain

Andrea's bio
Andrea Gioia is a Partner and CTO at Quantyca, a consulting company specializing in data management. He is also a co-founder of blindata.io, a SaaS platform focused on data governance and compliance. With over two decades of experience in the field, Andrea has led cross-functional teams in the successful execution of complex data projects across diverse market sectors, ranging from banking and utilities to retail and industry.
In his current role as CTO at Quantyca, Andrea primarily focuses on advisory, helping clients define and execute their data strategy with a strong emphasis on organizational and change management issues. Actively involved in the data community, Andrea is a regular speaker, writer, and author of 'Managing Data as a Product'. Currently, he is the main organizer of the Data Engineering Italian Meetup and leads the Open Data Mesh Initiative. Within this initiative, Andrea has published the data product descriptor open specification and is guiding the development of the open-source ODM Platform to support the automation of the data product lifecycle. Andrea is an active member of DAMA and, since 2023, has been part of the scientific committee of the DAMA Italian Chapter.

Connect with Andrea online
LinkedIn (#TheDataJoy)
Github

Video
Here’s the video version of our conversation: https://www.youtube.com/watch?v=g34K_kJGZMc

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 30. In the world of enterprise architectures, data products are emerging as a solution to the problem of siloed data and knowledge. As a data and metadata management consultant, Andrea Gioia helps his clients realize the value in their data assets by assembling them into composable, reusable data products. Built around collaboratively developed ontologies, these data products evolve into something that might also be called a knowledge product.

Interview transcript
Larry: Hi, everyone. Welcome to episode number 30 of the Knowledge Graph Insights podcast. I'm really happy today to welcome to the show Andrea Gioia. Andrea, he does a lot of stuff. He's a busy guy. He's a partner and the chief technical officer at Quantyca, a consulting firm that works on data and metadata management. He's the founder of Blindata, a SaaS product that goes with his consultancy. I'll let him talk a little bit more about that. He's the author of the book Managing Data as a Product, and he comes out of the data heritage but he's now one of these knowledge people like us.…
Dave McComb

During the course of his 25-year consulting career, Dave McComb has discovered both a foundational problem in enterprise architectures and the solution to it. The problem lies in application-focused software engineering that results in an inefficient explosion of redundant solutions that draw on overlapping data sources. The solution that Dave has introduced is a data-centric architecture approach that treats data like the precious business asset that it is.

We talked about:
- his work as the CEO of Semantic Arts, a prominent semantic technology and knowledge graph consultancy based in the US
- the application-centric quagmire that most modern enterprises find themselves trapped in
- data centricity, the antidote to application centricity
- his early work in semantic modeling
- how the discovery of the "core model" in an enterprise facilitates modeling and building data-centric enterprise systems
- the importance of "baby step" approaches and working with actual customer data in enterprise data projects
- how building to "enduring business themes" rather than to the needs of individual applications creates a more solid foundation for enterprise architectures
- his current interest in developing a semantic model for the accounting field, drawing on his history in the field and on Semantic Arts' gist upper ontology
- the importance of the concept of a "commitment" in an accounting model
- how his approach to financial modeling permits near-real-time reporting
- his Data-Centric Architecture Forum, a practitioner-focused event held each June in Ft. Collins, Colorado

Dave's bio
Dave McComb is the CEO of Semantic Arts. In 2000 he co-founded Semantic Arts with the aim of bringing semantic technology to enterprises. From 2000 to 2010 Semantic Arts focused on ways to improve enterprise architecture through ontology modeling and design. Around 2010 Semantic Arts began helping clients more directly with implementation, which led to the use of knowledge graphs in enterprises.
Semantic Arts has conducted over 100 successful projects with a number of well-known firms including Morgan Stanley, Electronic Arts, Amgen, Standard & Poor's, Schneider Electric, MD Anderson, the International Monetary Fund, Procter & Gamble, and Goldman Sachs, as well as a number of government agencies. Dave is the author of Semantics in Business Systems (2003), which made the case for using semantics to improve the design of information systems; Software Wasteland (2018), which points out how application-centric thinking has led to the deplorable state of enterprise systems; and The Data-Centric Revolution (2019), which outlines an alternative to the application-centric quagmire. Prior to founding Semantic Arts he was VP of Engineering for Velocity Healthcare, a dot-com startup that pioneered the model-driven approach to software development. He was granted three patents on the architecture developed at Velocity. Prior to that he was with a small consulting firm, First Principles Consulting. Prior to that he was part of the problem.

Connect with Dave online
LinkedIn
email: mccomb at semanticarts dot com
Semantic Arts

Resources mentioned in this interview
Dave's books:
- The Data-Centric Revolution: Restoring Sanity to Enterprise Information Systems
- Software Wasteland: How the Application-Centric Quagmire is Hobbling Our Enterprises
- Semantics in Business Systems: The Savvy Manager's Guide
gist ontology
Data-Centric Architecture Forum

Video
Here’s the video version of our conversation: https://youtu.be/X_hZG7cFOCE

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 29. Every modern enterprise wrestles with its data, trying to get the most out of it. The smartest businesses have figured out that it isn't just "the new oil" - data is the very bedrock of their enterprise architecture. For the past 25 years, Dave McComb has helped companies understand the...…
Ole Olesen-Bagneux

In every enterprise, says Ole Olesen-Bagneux, the information you need to understand your organization's metadata is already there. It just needs to be discovered and documented. Ole's Meta Grid can be as simple as a shared, curated collection of documents, diagrams, and data but might also be expressed as a knowledge graph. Ole appreciates "North Star" architectures like microservices and the Data Mesh but presents the Meta Grid as a simpler way to manage enterprise metadata.

We talked about:
- his work as Chief Evangelist at Actian
- his forthcoming book, "Fundamentals of Metadata Management"
- how he defines his Meta Grid: an integration architecture that connects metadata across metadata repositories
- his definition of metadata and its key characteristic, that it's always in two places at once
- how the Meta Grid compares with microservices architectures and organizing concepts like Data Mesh
- the nature of the Meta Grid as a small, simple, and slow architecture which is not technically difficult to achieve
- his assertion that you can't build a Meta Grid because it already exists in every organization
- the elements of the Meta Grid: documents, diagrams or pictures, and examples of data
- how knowledge graphs fit into the Meta Grid
- his appreciation for "North Star" architectures like Data Mesh but also how he sees the Meta Grid as a more pragmatic approach to enterprise metadata management
- the evolution of his new book from a knowledge graph book to his elaboration on the "slow" nature of the Meta Grid, in particular how its metadata focus contrasts with faster real-time systems like ERPs
- the shape of the team topology that makes Meta Grid work

Ole's bio
Ole Olesen-Bagneux is a globally recognized thought leader in metadata management and enterprise data architecture.
As VP, Chief Evangelist at Actian, he drives industry awareness and adoption of modern approaches to data intelligence, drawing on his extensive expertise in data management, metadata, data catalogs, and decentralized architectures. An accomplished author, Ole has written The Enterprise Data Catalog (O’Reilly, 2023). He is currently working on Fundamentals of Metadata Management (O’Reilly, 2025), introducing a novel metadata architecture known as the Meta Grid. With a PhD in Library and Information Science from the University of Copenhagen, his unique perspective bridges traditional information science with modern data management. Before joining Actian, Ole served as Chief Evangelist at Zeenea, where he played a key role in shaping and communicating the company’s technology vision. His industry experience includes leadership roles in enterprise architecture and data strategy at major pharmaceutical companies like Novo Nordisk. Ole is passionate about scalable metadata architectures, knowledge graphs, and enabling organizations to make data truly discoverable and usable.

Connect with Ole online
LinkedIn
Substack
Medium

Resources mentioned in this interview
- Fundamentals of Metadata Management, Ole's forthcoming book
- Data Management at Scale by Piethein Strengholt
- Fundamentals of Data Engineering by Joe Reis and Matt Housley
- Meta Grid as a Team Topology, Substack article
- Stewart Brand's Pace Layers

Video
Here’s the video version of our conversation: https://youtu.be/t01IZoegKRI

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 28. Every modern enterprise wrestles with the scale, the complexity, and the urgency of understanding their data and metadata. So, by necessity, comprehensive architectural approaches like microservices and the data mesh are complex, big, and fast. Ole Olesen-Bagneux proposes a simple, small, and slow way for enterprises to cultivate a shared understanding of their enterprise knowledge, a decentralized approach to metadata strategy that he calls the Meta Grid.

Interview transcript
Larry: Hi,…
Andrea Volpini

Your organization’s brand is what people say about you after you’ve left the room. It's the memories you create that determine how people think about you later. Andrea Volpini says that the same dynamic applies in marketing to AI systems. Modern brand managers, he argues, need to understand how both human and machine memory work and then use that knowledge to create digital memories that align with how AI systems understand the world.

We talked about:
- his work as CEO at WordLift, a company that builds knowledge graphs to help companies automate SEO and other marketing activities
- a recent experiment he did during a talk at an AI conference that illustrates the ability of applications like Grok and ChatGPT to build and share information in real time
- the role of memory in marketing to current AI architectures
- his discovery of how the agentic approach he was taking to automating marketing tasks was actually creating valuable context for AI systems
- the mechanisms of memory in AI systems and an analogy to human short- and long-term memory
- the similarities he sees in how the human neocortex forms memories and how the knowledge about memory is represented in AI systems
- his practice of representing entities as both triples and vectors in his knowledge graph
- how he leverages his understanding of the differences in AI models in his work
- the different types of memory frameworks to account for in both the consumption and creation of AI systems: semantic, episodic, and procedural
- his new way of thinking about marketing: as a memory-creation process
- the shift in focus that he thinks marketers need to make, "creating good memories for AI in order to protect their brand values"

Andrea's bio
Andrea Volpini is the CEO of WordLift and co-founder of Insideout10. With 25 years of experience in semantic web technologies, SEO, and artificial intelligence, he specializes in marketing strategies.
He is a regular speaker at international conferences, including SXSW, TNW Conference, BrightonSEO, The Knowledge Graph Conference, G50, and the Connected Data and AI Festival. Andrea has contributed to industry publications, including the Web Almanac by HTTP Archive. In 2013, he co-founded RedLink GmbH, a commercial spin-off focused on semantic content enrichment, natural language processing, and information extraction.

Connect with Andrea online
LinkedIn
X
Bluesky
WordLift

Video
Here’s the video version of our conversation: https://youtu.be/do-Y7w47CZc

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 27. Some experts describe the marketing concept of branding as "what people say about you after you’ve left the room." It's the memories they form of your company that define your brand. Andrea Volpini sees this same dynamic unfolding as companies turn their attention to AI. To build a memorable brand online, modern marketers need to understand how both human and machine memory work and then focus on creating memories that align with how AI systems understand the world.

Interview transcript
Larry: Hi, everyone. Welcome to episode number 27 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Andrea Volpini. Andrea is the CEO and the founder at WordLift, a company based in Rome. Tell the folks a little bit more about WordLift and what you're up to these days, Andrea.

Andrea: Yep. So we build knowledge graphs to help brands automate their SEO and marketing efforts using large language models and AI in general.

Larry: Nice. Yeah, and you're pretty good at this. You've been doing this a while and you had a recent success story, I think, that really highlights some of your current interests and your current work. Tell me about your talk in Milan and the little demonstration you did with that.

Andrea: Yeah, yeah, so it was last week at AI Festival,…
Jacobus Geluk

The arrival of AI agents creates urgency around the need to guide and govern them. Drawing on his 15-year history of building reliable AI solutions for banks and other enterprises, Jacobus Geluk sees a standards-based data-product marketplace as the key to creating the thriving data economy that will enable AI agents to succeed at scale. Jacobus launched the effort to create the DPROD data-product description specification, creating the supply side of the data market. He's now forming a working group to document the demand side, a "use-case tree" specification to articulate the business needs that data products address.

We talked about:
- his work as CEO at Agnos.ai, an enterprise knowledge graph and AI consultancy
- the working group he founded in 2023, which resulted in the DPROD specification to describe data products
- an overview of the data-product marketplace and the data economy
- the need to account for the demand side of the data marketplace
- the intent of his current work to address the disconnect between tech activities and business use cases
- how the capabilities of LLMs and knowledge graphs complement each other
- the origins of his "use-case tree" model in a huge banking enterprise knowledge graph he built ten years ago
- how use-case trees improve LLM-driven multi-agent architectures
- some examples of the persona-driven, tech-agnostic solutions in agent architectures that use-case trees support
- the importance of constraining LLM action with a control layer that governs agent activities, accounting for security, data sourcing, and issues like data lineage and provenance
- the new Use Case Tree Work Group he is forming
- the paradox in the semantic technology industry now of a lack of standards in a field with its roots in W3C standards

Jacobus' bio
Jacobus Geluk is a Dutch Semantic Technology Architect and CEO of agnos.ai, a UK-based consulting firm with a global team of experts specializing in GraphAI — the combination of Enterprise Knowledge Graphs
(EKG) with Generative AI (GenAI). Jacobus has over 20 years of experience in data management and semantic technologies, previously serving as a Senior Data Architect at Bloomberg and Fellow Architect at BNY Mellon, where he led the first large-scale production EKG in the financial industry. As a founding member and current co-chair of the Enterprise Knowledge Graph Forum (EKGF), Jacobus initiated the Data Product Workgroup, which developed the Data Product Ontology (DPROD) — a proposed OMG standard for consistent data product management across platforms. Jacobus can claim to have coined the term "Enterprise Knowledge Graph" (EKG) more than 10 years ago, and his work has been instrumental in advancing semantic technologies in financial services and other information-intensive industries.

Connect with Jacobus online
LinkedIn
Agnos.ai

Resources mentioned in this podcast
- DPROD specification
- Enterprise Knowledge Graph Forum
- Object Management Group
- Use Case Tree Method for Business Capabilities
- DCAT Data Catalog Vocabulary

Video
Here’s the video version of our conversation: https://youtu.be/J0JXkvizxGo

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 26. In an AI landscape that will soon include huge groups of independent software agents acting on behalf of humans, we'll need solid mechanisms to guide the actions of those agents. Jacobus Geluk looks at this situation from the perspective of the data economy, specifically the data-products marketplace. He helped develop the DPROD specification that describes data products and is now focused on developing use-case trees that describe the business needs that they address.

Interview transcript
Larry: Okay. Hi everyone. Welcome to episode number 26 of the Knowledge Graph Insights podcast. I am really happy today to welcome to the show, Jacobus Geluk. Sorry, I try to speak Dutch, do my best.…
Rebecca Schneider Skills that Rebecca Schneider learned in library science school - taxonomy, ontology, and semantic modeling - have only become more valuable with the arrival of AI technologies like LLMs and the growing interest in knowledge graphs. Two things have stayed constant across her library and enterprise content strategy work: organizational rigor and the need to always focus on people and their needs. We talked about: her work as Co-Founder and Executive Director at AvenueCX, an enterprise content strategy consultancy her background as a "recovering librarian" and her focus on taxonomies, metadata, and structured content the importance of structured content in LLMs and other AI applications how she balances the capabilities of AI architectures and the needs of the humans that contribute to them the need to disambiguate the terms that describe the span of the semantic spectrum the crucial role of organization in her work and how you don't have to have formally studied library science to do it the role of a service mentality in knowledge graph work how she measures the efficiency and other benefits of well-organized information how domain modeling and content modeling work together in her work her tech-agnostic approach to consulting the role of metadata strategy in her work how new AI tools permit easier content tagging and better governance the importance of "knowing your collection," not becoming a true subject matter expert but at least getting familiar with the content you are working with the need to clean up your content and data to build successful AI applications Rebecca's bio Rebecca is co-founder of AvenueCX, an enterprise content strategy consultancy. Her areas of expertise include content strategy, taxonomy development, and structured content. She has guided content strategy in a variety of industries: automotive, semiconductors, telecommunications, retail, and financial services. 
Connect with Rebecca online LinkedIn email: rschneider at avenuecx dot com Video Here’s the video version of our conversation: https://youtu.be/ex8Z7aXmR0o Podcast intro transcript This is the Knowledge Graph Insights podcast, episode number 25. If you've ever visited the reference desk at your local library, you've seen the service mentality that librarians bring to their work. Rebecca Schneider brings that same sensibility to her content and knowledge graph consulting. Like all digital practitioners, her projects now include a lot more AI, but her work remains grounded in the fundamentals she learned studying library science: organizational rigor and a focus on people and their needs. Interview transcript Larry: Hi, everyone. Welcome to episode number 25 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show Rebecca Schneider. Rebecca is the co-founder and the executive director at AvenueCX, a consultancy in the Boston area. Welcome, Rebecca. Tell the folks a little bit more about what you're up to these days. Rebecca: Hi, Larry. Thanks for having me on your show. Hello, everyone. My name is Rebecca Schneider. I am a recovering librarian. I was a trained librarian, worked in a library with actual books, but for most of my career, I have been focusing on enterprise content strategy. Furthermore, I typically focus on taxonomies, metadata, structured content, and all of that wonderful world that we live in. Larry: Yeah, and we both come out of that content background and have sort of converged on the knowledge graph background together kind of over the same time period. And it's really interesting, like those skills that you mentioned, the library science skills of taxonomy, metadata, structured, and then the application of that in structured content in the content world, how, as you've got in more and more into knowledge graph stuff, how has that background, I guess...…
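Rebecca's core tools are taxonomies and metadata for structured content. The heart of a taxonomy is the broader/narrower relation between terms, which lets a single content tag be expanded to all of its ancestors for search and retrieval. A minimal Python sketch of that idea, with invented terms (a real taxonomy would typically be modeled in SKOS):

```python
# Hypothetical taxonomy sketch: child -> parent ("broader term") links,
# plus a lookup that expands one tag into its full ancestor chain.
# The vocabulary below is invented for illustration.

broader = {
    "sedan": "automobile",
    "suv": "automobile",
    "automobile": "vehicle",
}

def expand_tags(tag):
    """Return the tag followed by all of its broader terms,
    most specific first."""
    terms = [tag]
    while terms[-1] in broader:
        terms.append(broader[terms[-1]])
    return terms

print(expand_tags("sedan"))  # ['sedan', 'automobile', 'vehicle']
```

Tagging a document "sedan" thus also makes it findable under "automobile" and "vehicle", which is one concrete way well-organized metadata pays off in retrieval, including retrieval that feeds LLM applications.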
Ashleigh Faith With her 15-year history in the knowledge graph industry and her popular YouTube channel, Ashleigh Faith has informed and inspired a generation of graph practitioners and enthusiasts. She's an expert on semantic modeling, knowledge graph construction, and AI architectures and talks about those concepts in ways that resonate both with her colleagues and with newcomers to the field. We talked about: her popular IsA DataThing YouTube channel the crucial role of accurately modeling actual facts in semantic practice and AI architectures her appreciation of the role of knowledge graphs in aligning people in large organizations around concepts and the various words that describe them the importance of staying focused on the business case for knowledge graph work, which has become both more important with the arrival of LLMs and generative AI the emergence of more intuitive "talk to your graph" interfaces some of her checklist items for onboarding aspiring knowledge graph engineers how to decide whether to use a property graph or a knowledge graph, or both her hope that more RDF graph vendors will offer a free tier so that people can more easily experiment with them approaches to AI architecture orchestration the enduring importance of understanding how information retrieval works Ashleigh's bio Ashleigh Faith has her PhD in Advanced Semantics and over 15 years of experience working on graph solutions across the STEM, government, and finance industries. Outside of her day job, she is the Founder and host of the IsA DataThing YouTube channel and podcast where she tries to demystify the graph space. Connect with Ashleigh online LinkedIn IsA DataThing YouTube channel Video Here’s the video version of our conversation: https://youtu.be/eMqLydDu6oY Podcast intro transcript This is the Knowledge Graph Insights podcast, episode number 24. 
One way to understand the entity resolution capabilities of knowledge graphs is to picture an old-fashioned telephone operator moving plugs around a switchboard to make the right connections. Early in her career, that's one way that Ashleigh Faith saw the power of knowledge graphs. She has since developed sophisticated approaches to knowledge graph construction, semantic modeling, and AI architectures and shares her deeply informed insights on her popular YouTube channel. Interview transcript Larry: Hi, everyone. Welcome to episode number 24 of the Knowledge Graph Insights Podcast. I am super extra delighted today to welcome to the show Ashleigh Faith. Ashleigh is the host of the awesome YouTube channel IsA DataThing, which has thousands of subscribers, thousands of monthly views. I think it's many people's entry point into the knowledge graph world. Welcome, Ashleigh. Great to have you here. Tell the folks a little bit more about what you're up to these days. Ashleigh: Thanks, Larry. I've known you for quite some time. I'm really excited to be here today. What about me? I do a lot of semantic and AI stuff for my day job. But yeah, I think my main passion is also helping others get involved, understand some of the concepts a little bit better for the semantic space and now the neuro-symbolic AI. That's AI and knowledge graphs coming together. That is quite a hot topic right now, so lots and lots of untapped potential in what we can talk about. I do most of that on my channel. Larry: Yeah. I will refer people to your channel because we've got only a half-hour today. It's ridiculous. Ashleigh: Yeah. Larry: We just talked for an hour before we went on the air. It's ridiculous. What I'd really like to focus on today is the first stage in any of this, the first step in any of these knowledge graph implementations or any of this stuff is modeling. I think about it from a designerly perspective. 
I do a lot of mental model discernment, user research kind of stuff, and then conceptual modeling to agree on things.…
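The switchboard metaphor in the intro above describes entity resolution: connecting records that refer to the same real-world thing. One common way to implement the connecting step (illustrative only, not a method from the episode) is union-find over records that share an identifying value. The records and identifiers below are invented:

```python
# Entity resolution sketch: records sharing any identifying value are
# linked into one entity via union-find, like an operator patching lines
# together on a switchboard. Data is invented for illustration.

records = [
    {"name": "IBM", "ticker": "IBM"},
    {"name": "International Business Machines", "ticker": "IBM"},
    {"name": "Intl. Bus. Machines", "lei": "EXAMPLE-LEI-0001"},
    {"name": "International Business Machines", "lei": "EXAMPLE-LEI-0001"},
]

parent = list(range(len(records)))

def find(i):
    """Find the representative of record i's cluster (path compression)."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    """Merge the clusters containing records i and j."""
    parent[find(i)] = find(j)

# Link any two records that share a value for the same identifying key.
seen = {}
for idx, rec in enumerate(records):
    for key in ("name", "ticker", "lei"):
        if key in rec:
            token = (key, rec[key])
            if token in seen:
                union(idx, seen[token])
            else:
                seen[token] = idx

clusters = {}
for idx in range(len(records)):
    clusters.setdefault(find(idx), []).append(idx)
print(len(clusters))  # 1 — all four records resolve to one entity
```

Real pipelines add fuzzy name matching and confidence scores, but the transitive "shared plug connects the lines" step is the same.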
Panos Alexopoulos Any knowledge graph or other semantic artifact must be modeled before it's built. Panos Alexopoulos has been building semantic models since 2006. In 2020, O'Reilly published his book on the subject, "Semantic Modeling for Data." The book covers the craft of semantic data modeling, the pitfalls practitioners are likely to encounter, and the dilemmas they'll need to overcome. We talked about: his work as Head of Ontology at Textkernel and his 18-year history working with symbolic AI and semantic modeling his definition and description of the practice of semantic modeling and its three main characteristics: accuracy, explicitness, and agreement the variety of artifacts that can result from semantic modeling: database schemas, taxonomies, hierarchies, glossaries, thesauri, ontologies, etc. the difference between identifying entities with human understandable descriptions in symbolic AI and numerical encodings in sub-symbolic AI the role of semantic modeling in RAG and other hybrid AI architectures a brief overview of data modeling as a practice how LLMs fit into semantic modeling: as sources of information to populate a knowledge graph, as coding assistants, and in entity and relation extraction other techniques besides NLP and LLMs that he uses in his modeling practice: syntactic patterns, heuristics, regular expressions, etc. the role of semantic modeling and symbolic AI in emerging hybrid AI architectures the importance of defining the notion of "autonomy" as AI agents emerge Panos' bio Panos Alexopoulos has been working since 2006 at the intersection of data, semantics and software, contributing to building intelligent systems that deliver value to business and society. Born and raised in Athens, Greece, Panos currently works as a principal educator at OWLTECH, developing and delivering training workshops that provide actionable knowledge and insights for data and AI practitioners. 
He also works as Head of Ontology at Textkernel BV, in Amsterdam, Netherlands, leading a team of data professionals in developing and delivering a large cross-lingual Knowledge Graph in the HR and Recruitment domain. Panos has published several papers at international conferences, journals and books, and he is a regular speaker in both academic and industry venues. He is also the author of the O’Reilly book “Semantic Modeling for Data – Avoiding Pitfalls and Dilemmas”, a practical and pragmatic field guide for data practitioners that want to learn how semantic data modeling is applied in the real world. Connect with Panos online LinkedIn Video Here’s the video version of our conversation: https://youtu.be/ENothdlfYGA Podcast intro transcript This is the Knowledge Graph Insights podcast, episode number 23. In order to build a knowledge graph or any other semantic artifact, you first need to model the concepts you're working with, and that model needs to be accurate, to explicitly represent all of the ideas you're working with, and to capture human agreements about them. Panos Alexopoulos literally wrote the book on semantic modeling for data, covering both the principles of modeling as well as the pragmatic concerns of real-world modelers. Interview transcript Larry: Hi everyone. Welcome to episode number 23 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show Panos Alexopoulos. Panos is the head of ontology at Textkernel, a company in Amsterdam that works on knowledge graphs for the HR and recruitment world. Welcome, Panos. Tell the folks a little bit more about what you're doing these days. Panos: Hi Larry. Thank you very much for inviting me to your podcast. I'm really happy to be here. Yeah, so as you said, I'm head of ontology at Textkernel. Actually, I've been working in the field of data semantics, knowledge graph ontologies for almost now 18 years, even before the era of machine learning,…
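The semantic models Panos describes ultimately bottom out in explicit subject-predicate-object statements that humans can read and agree on. A minimal sketch of that triple pattern with a wildcard query (the facts and predicate names below are invented; production systems would use an RDF store and SPARQL):

```python
# Minimal sketch of the (subject, predicate, object) triple pattern that
# underlies semantic models, with a wildcard pattern query.
# Facts and predicate names are invented for illustration.

triples = {
    ("Ada", "worksAs", "DataEngineer"),
    ("DataEngineer", "subClassOf", "Engineer"),
    ("Ada", "knows", "Grace"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

print(query(s="Ada"))  # every explicit fact about Ada
```

Note how each fact is explicit and human-readable, in contrast to the numerical encodings of sub-symbolic AI mentioned in the episode: agreement about what "worksAs" means is part of the model itself.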
Mike Pool Mike Pool sees irony in the fact that semantic-technology practitioners struggle to use the word "semantics" in ways that meaningfully advance conversations about their knowledge-representation work. In a recent LinkedIn post, Mike even proposed a moratorium on the use of the word. We talked about: his multi-decade career in knowledge representation and ontology practice his opinion that we might benefit from a moratorium on the term "semantics" the challenges in pinning down the exact scope of semantic technology how semantic tech permits reusability and enables scalability the balance in semantic practice between 1) ascribing meaning in tech architectures independent of its use in applications and 2) considering end-use cases the importance of staying domain-focused as you do semantic work how to stay pragmatic in your choice of semantic methods how reification of objects is not inherently semantic but does create a framework for discovering meaning how to understand and capture subtle differences in meaning of seemingly clear terms like "merger" or "customer" how LLMs can facilitate capturing meaning Mike's bio Michael Pool works in the Office of the CTO at Bloomberg, where he is working on a tool to create and deploy ontologies across the firm. Previously, he was a principal ontologist on the Amazon Product Knowledge team, and has also worked to deploy semantic technologies/approaches and enterprise knowledge graphs at a number of big banks in New York City. Michael also spent a couple of years on the famous Cyc project and has evaluated knowledge representation technologies for DARPA. He has also worked on tooling to integrate probabilistic and semantic models and oversaw development of an ontology to support a consumer-facing semantic search engine. He lives in New York City and loves to run around in circles in Central Park. 
Connect with Mike online LinkedIn Video Here’s the video version of our conversation: https://youtu.be/JlJjBWGwSDg Podcast intro transcript This is the Knowledge Graph Insights podcast, episode number 22. The word "semantics" is often used imprecisely by semantic-technology practitioners. It can describe a wide array of knowledge-representation practices, from simple glossaries and taxonomies to full-blown enterprise ontologies, any of which may be summarized in a conversation as "semantics." Mike Pool thinks that this dynamic - using a word that lacks precise meaning while assuming that it communicates a lot - may justify a moratorium on the use of the term. Interview transcript Larry: Hi everyone, welcome to episode number 22 of the Knowledge Graph Insights podcast. I'm really happy today to welcome to the show Mike Pool. Mike is a longtime ontologist, a couple of decades plus. He recently took a position at Bloomberg. But he made this really provocative post on LinkedIn lately that I want to flesh out today, and we'll talk more about that throughout the rest of the show. Welcome, Mike, tell the folks a little bit more about what you're up to these days. Mike: Hey, thank you, Larry. Yeah. As you noted, I've just taken a position with Bloomberg and for these many years that you alluded to, I've been very heavily focused on building, doing knowledge representation in general. In the last let's say decade or so I've been particularly focused on using ontologies and knowledge graphs in large banks, or large organizations at least, to help organize disparate data, to make it more accessible, break down data silos, et cetera. It's particularly relevant in the finance industry where things can be sliced and diced in so many different ways. I find there's a really important use case in the financial space but in large organizations in general, in my opinion, for using ontology. 
So that's a lot of what I've been thinking about, to make that more accessible to the organization and to help them build these ontologies and utilize th...…