Resilient Cyber w/ Sounil Yu - The Intersection of AI and Need-to-Know

26:41
 
Content provided by Chris Hughes. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Chris Hughes or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.

In this episode, we sit down with Sounil Yu, Co-Founder and CTO at Knostic, a security company focusing on need-to-know-based access controls for LLM-based Enterprise AI.

Sounil is a recognized industry security leader and the author of the widely popular Cyber Defense Matrix.

Sounil and I dug into a lot of interesting topics, such as:

  • The latest news with DeepSeek and some of its implications regarding broader AI, cybersecurity, and the AI arms race, most notably between China and the U.S.
  • The different approaches to AI security and safety we’re seeing unfold between the U.S. and EU, with the former being more best-practice and guidance-driven and the latter being more rigorous and including hard requirements.
  • The age-old concept of need-to-know access control, the role it plays, and the potentially new challenges of implementing it for LLMs
  • Organizations rolling out and adopting LLMs, and how they can go about implementing least-permissive access control and need-to-know (a minimal illustrative sketch follows this list)
  • Some of the different security considerations between
  • Some of the work Knostic is doing around LLM enterprise readiness assessments, focusing on visibility, policy enforcement, and remediation of data exposure risks
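
As a rough illustration of the need-to-know idea discussed above, here is a minimal Python sketch that filters retrieved documents against a user's entitlements before they ever reach an LLM's context window. The data model, labels, and function names are hypothetical assumptions for this example, not Knostic's product or API; a real deployment would draw entitlements from an identity provider and labels from a data classification system.

from dataclasses import dataclass

# Hypothetical data model for illustration only; not a vendor API.
@dataclass
class Document:
    doc_id: str
    content: str
    labels: set          # e.g., {"finance", "restricted"}

@dataclass
class User:
    user_id: str
    entitlements: set    # topics/projects this user has a need to know

def filter_for_need_to_know(user, candidates):
    # Keep only documents whose labels are fully covered by the user's entitlements.
    return [d for d in candidates if d.labels <= user.entitlements]

def build_prompt(question, allowed_docs):
    # Assemble the model context only from documents the user is cleared to see,
    # so the LLM cannot summarize or leak data outside the user's need to know.
    context = "\n\n".join(d.content for d in allowed_docs)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

user = User("alice", {"engineering"})
docs = [
    Document("doc-1", "Quarterly revenue forecast ...", {"finance", "restricted"}),
    Document("doc-2", "Service architecture overview ...", {"engineering"}),
]
print(build_prompt("How is the service structured?", filter_for_need_to_know(user, docs)))

The key design choice in a sketch like this is enforcing the control at retrieval time rather than trusting the model to withhold information it has already been shown.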

----------------

Interested in sponsoring an issue of Resilient Cyber?

This includes reaching over 16,000 subscribers, ranging from Developers, Engineers, and Architects to CISOs/Security Leaders and Business Executives.

Reach out below!

-> Contact Us!

----------------


All episodes

 
In this episode, I sit down with long-time vulnerability management and data science experts Jay Jacobs and Michael Roytman, who recently co-founded Empirical Security. We dive into the state of vulnerability management, including:

  • How difficult it is to quantify and evaluate the effectiveness of vulnerability prioritization and scoring schemes such as CVSS, EPSS, KEV, and proprietary vendor prioritization frameworks, and what can be done better (a simple illustrative sketch follows this entry)
  • Systemic challenges, including setbacks in the NIST National Vulnerability Database (NVD) program, the MITRE CVE funding fiasco, and the need for a more resilient vulnerability database and reporting ecosystem
  • Domain-specific considerations for vulnerability identifiers and vulnerability management in areas such as AppSec, Cloud, and Configuration Management, and using data to make more effective decisions
  • The overuse of the term “single pane of glass” and some alternatives
  • Empirical’s innovative approach to “localized” models for vulnerability management, which takes unique organizational and environmental considerations into account, such as mitigating controls, threats, tooling, and more, and how they are experimenting with this new approach for the industry…
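
To make the prioritization discussion above concrete, here is a simple Python sketch that folds CVSS, EPSS, KEV, and reachability signals into coarse remediation buckets. The thresholds, bucket names, and CVE-style identifiers are illustrative assumptions, not Empirical Security's methodology or any standard's guidance.

def priority_bucket(cvss, epss, on_kev, reachable):
    # Coarse remediation bucket for a single finding; thresholds are illustrative.
    if on_kev and reachable:
        return "P1 - fix now (known exploited and reachable)"
    if epss >= 0.5 or (cvss >= 9.0 and reachable):
        return "P2 - fix this sprint"
    if cvss >= 7.0:
        return "P3 - schedule"
    return "P4 - track"

# Placeholder findings; identifiers and scores are made up for the example.
findings = [
    {"cve": "CVE-0000-0001", "cvss": 9.8, "epss": 0.92, "kev": True,  "reachable": True},
    {"cve": "CVE-0000-0002", "cvss": 9.1, "epss": 0.03, "kev": False, "reachable": False},
]
for f in findings:
    print(f["cve"], "->", priority_bucket(f["cvss"], f["epss"], f["kev"], f["reachable"]))

The "localized" models discussed in the episode would go further than a global rule like this, weighting environment-specific signals such as compensating controls, threats, and tooling.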
 
In this episode, we sit down with Ravid Circus, Co-Founder and CPO of Seemplicity, to discuss tackling the prioritization crisis in cybersecurity and how AI is changing vulnerability management. We dove into a lot of great topics, including:

  • The massive challenge of not just finding and managing vulnerabilities but also remediating them, with Seemplicity’s Year in Review report finding that organizations face 48.6 million vulnerabilities annually and that only 1.7% of them are critical. That still leaves hundreds of thousands to millions of vulnerabilities to remediate, and organizations struggle with this even with the context of what to prioritize.
  • The excitement around AI in Cyber, including in GRC, SecOps, and, of course, AppSec and vulnerability management. How do you discern between what is hype and what can provide real outcomes?
  • Practical steps teams can take to bridge the gap between AI’s ability to find problems and security teams’ ability to fix them
  • Determining who is responsible for fixing findings, one of the major issues in the space of Remediation Operations, where Seemplicity specializes. Ravid talks about how, both technically and culturally, Seemplicity addresses this challenge of finding the fixer.
  • What lies ahead for Seemplicity this year with RSA and beyond…
 
In this episode, we sit down with Varun Badhwar, Founder and CEO of Endor Labs, to discuss the state of AI for AppSec and move beyond the buzzwords. We discussed the rapid adoption of AI-driven development, its implications for AppSec, and how AppSec can leverage AI to address longstanding challenges and mitigate organizational risks at scale. Varun and I dove into a lot of great topics, such as:

  • The rise of GenAI and LLMs and their broad implications for Cybersecurity
  • The dominant use case of AI-driven development with Copilots and LLM-written code, leading to a Developer productivity boost. AppSec has historically struggled to keep up, with vulnerability backlogs getting out of control. What will the future look like now?
  • Studies show that AI-driven development and Copilots don’t inherently produce secure code, and frontier models are primarily trained on open source software, which has vulnerabilities and other risks. What are the implications of this for AppSec?
  • How AppSec and Cyber can leverage AI and agentic workflows to address systemic security challenges, given that Developers and attackers are both early adopters of this technology
  • Navigating vulnerability prioritization, dealing with insecure design decisions, and addressing factors such as transitive dependencies
  • The importance of integrating with developer workflows, reducing cognitive disruption, and avoiding imposing a “Developer Tax” with legacy processes and tooling from security…
 
In this episode, we sit down with David Melamed and Shai Horovitz of the Jit team to discuss Agentic AI for AppSec and how security teams use it to get real work done. We covered a lot of key topics, including:

  • Some of the systemic problems facing AppSec even before the widespread adoption of AI, such as vulnerability prioritization, security technical debt, and being outnumbered exponentially by Developers
  • The surge of interest and investment in AI and agentic workflows for AppSec, and why AppSec is an appealing space for this sort of investment and excitement
  • How the prior wave of AppSec tooling focused on finding problems, riding the wave of shift left, how this has led to alert fatigue and overload, and how the next era of AppSec tools will need to focus on not just finding but actually fixing problems
  • Some of the unique capabilities and features the Jit team has been working on, such as purpose-built agents in areas such as SecOps, AppSec, and Compliance, as well as context graphs with organizational insights to drive effective remediation
  • The role of Agentic AI and how it will help tackle some of the systemic challenges in the AppSec industry
  • Addressing concerns around privacy and security when using AI by leveraging offerings from CSPs and integrating guardrails and controls to mitigate risks…
 
In this episode, we sit down with Piyush Sharrma, CEO and Co-Founder of Tuskira, an AI-powered defense optimization platform built around an Agentic Security Mesh. We dive into topics such as Platform vs. Point Solutions, security tool sprawl, alert fatigue, and how AI can create “intelligent” layers to unify and enhance security tooling ROI. We discussed:

  • What drove Piyush to jump back into the startup space after a successful exit from a previous startup he helped found
  • The industry debate around Platform vs. Point Solutions (or Best-of-Breed) and the differing perspectives of established industry leaders and innovative startups
  • Dealing with the challenge of alert fatigue for security and development teams, and the role of AI in reducing cognitive overload and providing insight into organizational risks across tools, tech stacks, and architectures
  • The role of AI in providing intelligence layers, or an Agentic Security Mesh, across existing security tools and defenses, and mitigating organizational risks beyond isolated vulnerability scans by looking at compensating controls, configurations, and more
  • Shifting security from a reactionary model built around incident response and exploitation to a preemptive risk defense model that minimizes attack surface and optimizes existing security investments and architectures…
 
In this episode, we sit with Lasso Security CEO and Co-Founder Elad Schulman. Lasso focuses on secure enterprise LLM/GenAI adoption, spanning LLM applications, GenAI chatbots, code protection, model red teaming, and more. Check them out at https://lasso.security. We dove into a lot of great topics, such as:

  • Dealing with challenges around visibility and governance of AI, much like previous technological waves such as mobile, Cloud, and SaaS
  • Unique security considerations for different paths of using and building with AI, such as self-hosted models and consuming models as-a-service from SaaS LLM providers
  • Potential vulnerabilities and threats associated with AI-driven development products such as Copilots and coding assistants
  • Software Supply Chain Security (SSCS) risks such as package hallucinations, plus both safeguarding the data that goes out to external coding tools and securely consuming the data coming into the organization (a minimal illustrative sketch follows this entry)
  • Securing AI itself and dealing with risks and threats such as model poisoning, including implementing model red teaming
  • Lasso’s AI security research, which has uncovered several critical concerns, such as Microsoft’s Copilot exposing thousands of private GitHub repos…
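
Since package hallucination comes up in this conversation, here is a minimal Python sketch of one basic guard: confirming that a dependency suggested by a coding assistant actually exists on the public index before it is installed. The check uses PyPI's public JSON endpoint; the function name and example package names are hypothetical, this is not Lasso's tooling, and existence alone does not make a package trustworthy.

import json
import urllib.error
import urllib.request

def exists_on_pypi(package):
    # Returns True if the name resolves on PyPI's JSON API; a 404 (or any
    # network failure, treated conservatively here) returns False.
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200 and bool(json.load(resp).get("info"))
    except urllib.error.URLError:
        return False

# "requests" is a real, widely used package; the second name is a deliberately
# implausible placeholder standing in for a hallucinated suggestion.
for name in ["requests", "definitely-not-a-real-package-xyz-123"]:
    print(name, "->", "exists" if exists_on_pypi(name) else "not found (possible hallucination)")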
 
In this episode, we sit down with security leader and venture investor Sergej Epp to discuss the Cloud-native security landscape. Sergej currently serves as the Global CISO and Executive at Cloud Security leader Sysdig and is a Venture Partner at Picus Capital. We dive into insights from Sysdig's recent "2025 Cloud-native Security and Usage Report."

Big shout out to our episode sponsor, Yubico! Passwords aren’t enough. Cyber threats are evolving, and attackers bypass weak authentication every day. YubiKeys provide phishing-resistant security for individuals and businesses: fast, frictionless, and passwordless. Upgrade your security: https://yubico.com

Sergej and I dove into a lot of great topics related to Cloud-native security, including:

  • Key trends in the latest Sysdig 2025 Cloud-native Security Report and trends that have stayed consistent year over year. Sergej points out that while attackers have stayed consistent, organizations have made, and continue to make, improvements to their security.
  • Sergej’s current role as Sysdig’s internal CISO versus his prior role as a field CISO, and the differences between the two in terms of how you interact with your organization, customers, and the community.
  • The need for automated incident response, touching on how modern cloud-native attacks can happen in as little as 10 minutes and how organizations can and do struggle without sufficient visibility and the ability to automate their incident response.
  • The report’s finding that machine identities, or Non-Human Identities (NHI), are 7.5 times riskier than human identities, and that there are 40,000 times more of them to manage. This is a massive problem and gap for the industry, and Sergej and I walked through why this is a challenge and its potential risks.
  • Vulnerability prioritization continues to be crucial, with the latest Sysdig report showing that just 6% of vulnerabilities are “in-use”, or reachable. Still, container bloat has ballooned, quintupling in the last year alone. This presents real problems as organizations continue to expand their attack surface with expanded open-source usage but struggle to determine which vulnerabilities truly present risk and need to be addressed.
  • The challenges with compliance, as organizations wrestle with multiple disparate compliance frameworks, and how compliance can drive better security but can also have the inverse effect when written poorly or when it fails to keep pace with technologies and threats.
  • AI/ML packages, whose usage has grown by 500%, while organizations have decreased public exposure of AI/ML workloads by 38% since the year prior, showing some improvement in safeguarding AI workloads from risk as well…
 
In this episode, we sit down with Lior Div and Nate Burke of 7AI to discuss Agentic AI, Service-as-Software, and the future of Cybersecurity. Lior is the CEO/Co-Founder of 7AI and a former CEO/Co-Founder of Cybereason, while Nate brings a background as a CMO with firms such as Axonius, Nagomi, and now 7AI. Lior and Nate bring a wealth of experience and expertise from various startups and industry-leading firms, which made for an excellent conversation. We discussed:

  • The rise of AI and Agentic AI and its implications for cybersecurity
  • Why the 7AI team chose to focus on SecOps in particular, and the importance of tackling toil work to reduce cognitive overload, address workforce challenges, and improve security outcomes
  • The importance of distinguishing between human and non-human work, and why the idea of eliminating analysts is the wrong approach
  • Moving beyond being reactive by leveraging Agentic AI for threat hunting and proactive security activities
  • The unique culture that comes from having the 7AI team on-site together in person, allowing them to go from idea to production in a single day while responding quickly to design partners and customer requests
  • Challenges of building with Agentic AI and how quickly the space is evolving and growing
  • Key perspectives from Nate as a CMO on messaging around AI and getting security to be an early adopter rather than a laggard with this emerging technology
  • Insights from Lior on building 7AI compared to his previous experience founding Cybereason, which went on to become an industry giant and leader in the EDR space…
 
In this episode, we sit down with investor, advisor, board member, and cybersecurity leader Chenxi Wang to discuss the intersection of AI and Cybersecurity, what Agentic AI means for Services-as-a-Software, and security in the boardroom. Chenxi and I covered a lot of ground, including:

  • Discussions of AI and Cybersecurity usually divide into two categories: AI for Cybersecurity and Securing AI. Chenxi and I walk through the potential for each and which one she finds more interesting at the moment.
  • Chenxi believes LLMs are fundamentally changing the nature of software development, and the industry's current state seems to support that. We discussed what this means for Developers and the cybersecurity implications when LLMs and Copilots create the majority of code and applications.
  • LLMs and GenAI are currently being applied to various cybersecurity areas, such as SecOps, GRC, and AppSec. Chenxi and I unpack where AI may have the greatest impact and where we see the most investment and innovation today.
  • There is also the need to secure AI itself, which introduces new attack vectors such as supply chain attacks, model poisoning, prompt injection, and more. We cover how organizations are currently dealing with these new attack vectors and the potential risks.
  • The biggest buzz of 2025 (and beyond) is Agentic AI, or AI Agents, and their potential to disrupt traditional services work, which represents an outsized portion of cybersecurity spending and revenue. Chenxi envisions a future where Agentic AI and Services-as-a-Software may change what cyber services look like and how cyber activities are conducted within an organization.

If you aren’t already following Chenxi Wang on LinkedIn, I strongly recommend you do. I have a lot of connections, but she is someone whose posts I always stop to read, because she shares a ton of great insights on the boardroom, investing, cyber, startups, AI, and more. I’m thankful to have her on the show!…
 
In this episode, we sit down with Rob Shavell, CEO and Co-Founder of DeleteMe, an organization focused on safeguarding exposed personal data on the public web and addressing user privacy challenges. We dove into a lot of great topics, such as:

  • The rapidly growing problem of personal data ending up on the public web, and some of the major risks many may not think about or realize
  • Trends contributing to personal data exposure, from the Internet itself to social media, mobile phones and apps, IoT devices, COVID, and now AI
  • Where to get started when it comes to taking control of your personal data and privacy
  • Potential abuses and malicious uses of personal data, and how threat actors are leveraging it
  • How DeleteMe can help, as well as free resources and DIY guides individuals can use to mitigate the risk of their personal data being exposed…
 
In this episode of Resilient Cyber, we sit down with Steve Martano, Partner in the Cyber Security Practice at Artico Search, to discuss the recent IANS & Artico Search publications on the 2025 State of the CISO, security budgets, and broader security career dynamics. Steve and I touched on some great topics, including:

  • The 2025 State of the CISO report and its key findings
  • Board reporting cadences for CISOs and the importance of boardroom involvement in Cybersecurity
  • The three archetypes of CISOs: Tactical, Functional, and Strategic
  • How security leaders can advance their careers toward becoming strategic CISOs, and key considerations for organizations looking to attract and retain security talent
  • The growing scope of responsibility for CISO roles, from not just Infosec to broader IT, business risk, and digital strategy, and the implications for CISOs
  • Security budget trends, spending, macroeconomic factors, and allocations

Here is a list of some great resources from IANS and Artico on various areas of interest for CISOs and security leaders alike:
https://www.iansresearch.com/resources/ians-security-budget-benchmark-report
https://www.iansresearch.com/resources/ians-ciso-compensation-benchmark-report
https://www.iansresearch.com/resources/ians-state-of-the-ciso-report
https://www.iansresearch.com/resources/ians-leadership-organization-benchmark-report…
 
In this episode of Resilient Cyber, we catch up with Katie Norton, an Industry Analyst at IDC who focuses on DevSecOps and Software Supply Chain Security. We dive into all things AppSec, including 2024 trends and analysis and 2025 predictions. Katie and I discussed:

  • Her role with IDC, her transition from research and data analytics into being a Cyber and AppSec Industry Analyst, and how that background has served her in her new endeavor
  • Key themes and reflections in AppSec through 2024, including disruption among Software Composition Analysis (SCA) and broader AppSec testing vendors
  • The age-old Platform vs. Point product debate, and the iterative, constant cycle of new entrants and innovations that grow, add capabilities, and become platforms or are acquired by larger platform vendors. The cycle continues indefinitely.
  • Katie's key research areas for 2025, including Application Security Posture Management (ASPM), Platform Engineering, SBOM Management, and Securing AI Applications
  • The concept of a “Developer Tax” and the financial and productivity impact legacy security tools and practices are having on organizations, while also building silos between security and our Development peers
  • The role of AI in corrective code fixes and the ability of AI-assisted automated remediation tooling to drive down remediation timelines and vulnerability backlogs
  • The importance of storytelling, both as an Industry Analyst and in the broader career field of Cybersecurity…
 
In this episode of Resilient Cyber, Ed Merrett, Director of Security & TechOps at Harmonic Security, dives into AI vendor transparency. We discussed the nuances of understanding models and data, and the potential customer impact of AI security risks. Ed and I dove into a lot of interesting GenAI security topics, including:

  • Harmonic’s recent report on GenAI data leakage, which found that nearly 10% of all organizational user prompts include sensitive data such as customer information, intellectual property, source code, and access keys
  • Guardrails and measures to prevent data leakage to external GenAI services and platforms (a minimal illustrative sketch follows this entry)
  • The intersection of SaaS governance and security with GenAI, and how GenAI is exacerbating longstanding SaaS security challenges
  • Supply chain risk management considerations for GenAI vendors and services, and the key questions and risks organizations should be considering
  • Some of the nuances between self-hosted GenAI/LLMs and external GenAI SaaS providers
  • The role of compliance around GenAI and the different approaches we see between, for example, the EU with the EU AI Act, NIS2, DORA, and more, versus the U.S.-based approach…
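
As a rough illustration of the outbound guardrails mentioned in this entry, here is a minimal Python sketch that scans a prompt for obvious secrets before it is sent to an external GenAI service. The patterns are a tiny, illustrative subset and this is not Harmonic Security's detection logic; real detections cover far more data types and use context-aware classification.

import re

# Illustrative patterns only; production guardrails cover PII, source code,
# customer records, and more, with much broader and smarter matching.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email_address":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt):
    # Return the names of sensitive-data patterns found in the prompt.
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

prompt = "Summarize this config: aws_key=AKIAABCDEFGHIJKLMNOP, contact ops@example.com"
hits = scan_prompt(prompt)
print("Blocked:" if hits else "Allowed:", ", ".join(hits) or "no sensitive patterns found")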
 
 
SecOps continues to be one of the most challenging areas of cybersecurity. It involves addressing alert fatigue, minimizing dwell time and mean time to respond (MTTR), automating repetitive tasks, integrating with existing tools, and demonstrating ROI. In this episode, we sit down with Grant Oviatt, Head of SecOps at Prophet Security and an experienced SecOps leader, to discuss how AI SOC Analysts are reshaping SecOps by addressing systemic security operations challenges and driving down organizational risk. Grant and I dug into a lot of great topics, such as:

  • Systemic issues impacting the SecOps space, including alert fatigue, triage, burnout, staffing shortages, and the inability to keep up with threats
  • What makes SecOps such a compelling niche for Agentic AI, and the key ways AI can help with these systemic challenges
  • How Agentic AI and platforms such as Prophet Security can improve key metrics such as SLOs and mean time to remediate (MTTR) to drive down organizational risk
  • Addressing the skepticism around AI, including its use in production operational environments and how the human-in-the-loop still plays a critical role for many organizations
  • How Agentic AI may augment or replace Managed Detection and Response (MDR) providers, which many organizations also use, depending on the organization's maturity, complexity, and risk tolerance
  • How Prophet Security differs from vendor-native offerings such as Microsoft Copilot, and the role of cloud-agnostic offerings for Agentic AI…
 