Cloud Security Podcast by Google

EP199 Your Cloud IAM Top Pet Peeves (and How to Fix Them)
Guests:
- Michele Chubirka, Staff Cloud Security Advocate, Google Cloud
- Sita Lakshmi Sangameswaran, Senior Developer Relations Engineer, Google Cloud
Topics:
- What is your reaction to “in the cloud you are one IAM mistake away from a breach”? Do you like it or do you hate it? Or do you "it depends" it? :-)
- Everyone's talking about how "identity is the new perimeter" in the cloud. Can you break that down in simple terms?
- A lot of people say “in the cloud, you must do IAM ‘right’”. What do you think that means? What is the first or the main idea that comes to your mind when you hear it?
- What’s this stuff about least privilege and separation of duties being less relevant? Why do they still matter in a cloud that changes rapidly? (See the sketch after this list.)
- What are your IAM Top Pet Peeves?
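Since least privilege is at the heart of several of these questions, here is a minimal Python sketch of what checking for it can look like in practice. It flags overly broad project-level bindings in a policy exported with `gcloud projects get-iam-policy PROJECT_ID --format=json`; the two flagged role names are real Google Cloud basic roles, while the sample policy and members are invented for illustration, not from the episode.

```python
# Broad "basic" roles that usually defeat least privilege when granted
# at the project level (these two role names are real Google Cloud roles).
OVERLY_BROAD_ROLES = {"roles/owner", "roles/editor"}


def audit_bindings(policy: dict) -> list[tuple[str, str]]:
    """Return (member, role) pairs that deserve a least-privilege review.

    `policy` mirrors the JSON printed by:
        gcloud projects get-iam-policy PROJECT_ID --format=json
    """
    findings = []
    for binding in policy.get("bindings", []):
        if binding["role"] in OVERLY_BROAD_ROLES:
            for member in binding.get("members", []):
                findings.append((member, binding["role"]))
    return findings


if __name__ == "__main__":
    # Hypothetical exported policy; the members are illustrative only.
    sample_policy = {
        "bindings": [
            {"role": "roles/editor",
             "members": ["user:intern@example.com"]},
            {"role": "roles/storage.objectViewer",
             "members": ["serviceAccount:app@example.iam.gserviceaccount.com"]},
        ]
    }
    for member, role in audit_bindings(sample_policy):
        print(f"REVIEW: {member} holds broad role {role}")
```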
Resources:
- Video (LinkedIn, YouTube)
- EP127 Is IAM Really Fun and How to Stay Ahead of the Curve in Cloud IAM?
- EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler
- IAM: There and back again using resource hierarchies
- IAM so lost: A guide to identity in Google Cloud
- I Hate IAM: but I need it desperately
- EP33 Cloud Migrations: Security Perspectives from The Field
- EP176 Google on Google Cloud: How Google Secures Its Own Cloud Use
- EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant
- EP188 Beyond the Buzzwords: Identity's True Role in Cloud and SaaS Security
- “Identity Crisis: The Biggest Prize in Security” paper
- “Learn to love IAM: The most important step in securing your cloud infrastructure” Next presentation

All episodes (245)

EP244 The Future of SOAPA: Jon Oltsik on Platform Consolidation vs. Best-of-Breed in the Age of Agentic AI (27:32)
Guest:
- Jon Oltsik, security researcher, ex-ESG analyst
Topics:
- You invented the concept of SOAPA (Security Operations & Analytics Platform Architecture). As we look towards SOAPA 2025, how do you see the ongoing debate between consolidating security around a single platform versus a more disaggregated, best-of-breed approach playing out? What are the key drivers for either strategy in today's complex environments?
- How can we have both “decoupling” and platformization going at the same time?
- With all the buzz around generative AI and agentic AI, how do you envision these technologies changing the future of the Security Operations Center (and SOAPA, of course)?
- Where do you see AI really working in the SOC today, and what is the proof of that actually happening? What does a realistic "AI SOC" look like in the next few years, and what are the practical implications for security teams?
- “Integration” is always a hot topic in security, and it has been for decades. Within the context of SOAPA and the adoption of advanced analytics, where do you see the most critical integration challenges today: vendor-centric ecosystems, strategic partnerships, or the push for open standards?
Resources:
- Jon Oltsik's “The Cybersecurity Bridge” podcast (Anton on it)
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP242 The AI SOC: Is This The Automation We've Been Waiting For?
- EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering
- EP180 SOC Crossroads: Optimization vs Transformation - Two Paths for Security Operations Center
- EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
- EP73 Your SOC Is Dead? Evolve to Output-driven Detect and Respond!
- Daniel Suarez's “Daemon” book and its sequel “Delta V”

EP243 Email Security in the AI Age: An Epic 2025 Arms Race Begins (29:00)
Guests:
- Cy Khormaee, CEO, AegisAI
- Ryan Luo, CTO, AegisAI
Topics:
- What is the state of email security in 2025?
- Why start an email security company now? Is it true that there are new and accelerating AI threats to email?
- It sounds cliche, but do you really have to use good AI to fight bad AI?
- What did you learn from your time fighting abuse at scale at Google that is helping you now?
- How do you see the future of email security, and what role will AI play?
Resources:
- aegisai.ai
- EP40 2021: Phishing is Solved?
- EP41 Beyond Phishing: Email Security Isn't Solved
- EP28 Tales from the Trenches: Using AI for Gmail Security
- EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents

EP242 The AI SOC: Is This The Automation We've Been Waiting For? (34:01)
Guest:
- Augusto Barros, Principal Product Manager, Prophet Security, ex-Gartner analyst
Topics:
- What is your definition of “AI SOC”? What will AI change in a SOC? What will the post-AI SOC look like?
- What are the primary mechanisms by which AI SOC tools reduce attacker dwell time, and what challenges do they face in maintaining signal fidelity?
- Why would this wave of SOC automation (namely, AI SOC) work now, if it did not fully succeed before (SOAR)?
- How do we measure progress towards AI SOC? What gets better at what time? How would we know? What SOC metrics will show improvement? (See the dwell-time sketch after this entry.)
- What common misconceptions or challenges have organizations encountered during the initial stages of AI SOC adoption, and how can they be overcome?
- Do you have a timeline for SOC AI adoption?
- Sure, everybody wants AI alert triage. What’s next? What's after that?
Resources:
- “State of AI in Security Operations 2025” report
- LinkedIn SOAR vs AI SOC argument post
- Are AI SOC Solutions the Real Deal or Just Hype?
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check
- “Noise: A Flaw in Human Judgment” book
- “Security Chaos Engineering” book (and Kelly episode)
- A Brief Guide for Dealing with ‘Humanless SOC’ Idiots
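One way to ground the "which SOC metrics will show improvement" question: pick a measurable quantity such as attacker dwell time and compare its distribution before and after an AI SOC rollout. The episode does not prescribe this method; the sketch below, with invented timestamps, is just one honest way to keep score.

```python
from datetime import datetime
from statistics import median


def dwell_hours(first_activity: str, detected: str) -> float:
    """Hours between first malicious activity and detection (ISO 8601 strings)."""
    delta = datetime.fromisoformat(detected) - datetime.fromisoformat(first_activity)
    return delta.total_seconds() / 3600


# Hypothetical incidents: (first malicious activity, detection time).
incidents = {
    "before AI SOC": [("2025-01-02T03:00", "2025-01-05T09:00"),
                      ("2025-01-10T11:00", "2025-01-11T20:00")],
    "after AI SOC": [("2025-03-01T08:00", "2025-03-01T14:00"),
                     ("2025-03-07T22:00", "2025-03-08T03:00")],
}
for label, pairs in incidents.items():
    med = median(dwell_hours(a, d) for a, d in pairs)
    print(f"median dwell time {label}: {med:.1f} h")
```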

EP241 From Black Box to Building Blocks: More Modern Detection Engineering Lessons from Google (31:33)
Guest:
- Rick Correa, Uber TL, Google SecOps, Google Cloud
Topics:
- On the 3rd anniversary of Curated Detections, you've grown from 70 rules to over 4700. Can you walk us through that journey? What were some of the key inflection points, and what have been the biggest lessons learned in scaling a detection portfolio so massively?
- Historically, the SecOps Curated Detection content was opaque, which led to, understandably, a bit of customer friction. We've recently made nearly all of that content transparent and editable by users. What were the challenges in that transition?
- You make a distinction between "Detection-as-Code" and a more mature "Software Engineering" paradigm. What gets better for a security team when they move beyond just version control and a CI/CD pipeline and start incorporating things like unit testing, readability reviews, and performance testing for their detections?
- The idea of a "Goldilocks Zone" for detections is intriguing: not too many, not too few. How do you find that balance, and what are the metrics that matter when measuring the effectiveness of a detection program?
- You mentioned customer feedback is important, but a confusion matrix isn't possible. Why is that?
- You talk about enabling customers to use your "building blocks" to create their own detections. Can you give us a practical example of how a customer might use a building block for something like detecting VPN and Tor traffic to augment their security? (See the sketch after this entry.)
- You have started using LLMs for reviewing the explainability of human-generated metadata. Can you expand on that? What have you found are the ripe areas for AI in detection engineering, and can you share any anecdotes of where AI has succeeded and where it has failed?
Resources:
- EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
- EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise
- EP181 Detection Engineering Deep Dive: From Career Paths to Scaling SOC Teams
- EP139 What is Chronicle? Beyond XDR and into the Next Generation of Security Operations
- EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther
- “Back to Cooking: Detection Engineer vs Detection Consumer, Again?” blog
- “On Trust and Transparency in Detection” blog
- “Detection Engineering Weekly” newsletter
- “Practical Threat Detection Engineering” book
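To make the "building blocks" idea concrete in plain code: a reusable block classifies a source IP, and a customer-specific detection composes it with local logic. This Python sketch is only an analogy; real Google SecOps building blocks are rules, and the IP sets below are documentation-range placeholders, not actual intel.

```python
# A reusable "building block": classify a source IP against intel sets.
# The IPs below are placeholder documentation addresses, not real intel.
TOR_EXIT_NODES = {"203.0.113.7", "198.51.100.23"}
VPN_PREFIXES = ("192.0.2.",)


def is_anonymized_source(ip: str) -> bool:
    """Building block: True if the IP looks like Tor or a known VPN range."""
    return ip in TOR_EXIT_NODES or ip.startswith(VPN_PREFIXES)


def admin_login_from_anonymizer(event: dict) -> bool:
    """Customer detection composed from the block: privileged logins via Tor/VPN."""
    return (event.get("action") == "login"
            and event.get("is_admin", False)
            and is_anonymized_source(event.get("src_ip", "")))


events = [
    {"action": "login", "is_admin": True, "src_ip": "203.0.113.7"},
    {"action": "login", "is_admin": False, "src_ip": "203.0.113.7"},
]
for e in events:
    if admin_login_from_anonymizer(e):
        print("ALERT:", e)
```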

EP240 Cyber Resiliency for the Rest of Us: Making it Happen on a Real-World Budget (29:25)
Guest:
- Errol Weiss, Chief Security Officer (CSO) at Health-ISAC
Topics:
- Why is adding digital resilience crucial for enterprises? How do you get leaders to shift from “just cybersecurity” to “digital resilience”?
- How can you be the most resilient you can be, given the resources? How can you be the most resilient with the least amount of money?
- How do you make yourself a smaller target?
- Smaller-target measures fit into what some call “basics.” But “basic” hygiene is actually very hard for many. What are your top 3 hygiene tips for making it happen that actually work?
- We are talking about under-resourced orgs, but some are much more under-resourced than others. What is your advice for those with an extreme shortage of security resources?
- Assessing vendor security: what is most important to consider in 2025? How do you avoid being hacked via your vendor?
Resources:
- ISAC history (1998 PDD 63)
- CISA Known Exploited Vulnerabilities Catalog
- Brian Krebs blog
- Health-ISAC Annual Threat Report
- Health-ISAC Home
- Health Sector Coordinating Council Publications
- Health Industry Cybersecurity Practices 2023
- HHS Cyber Performance Goals (CPGs)
- 10 ways to make cyber-physical systems more resilient
- EP193 Inherited a Cloud? Now What? How Do I Secure It?
- EP65 Is Your Healthcare Security Healthy? Mandiant Incident Response Insights
- EP49 Lifesaving Tradeoffs: CISO Considerations in Moving Healthcare to Cloud
- EP233 Product Security Engineering at Google: Resilience and Security
- EP204 Beyond PCAST: Phil Venables on the Future of Resilience and Leading Indicators

EP239 Linux Security: The Detection and Response Disconnect and Where Is My Agentless EDR (25:29)
Guest:
- Craig H. Rowland, Founder and CEO, Sandfly Security
Topics:
- When it comes to Linux environments, spanning on-prem, cloud, and even (gasp) hybrid setups, where are you seeing the most significant blind spots for security teams today?
- There's sometimes a perception that Linux is inherently more secure or less of a malware target than Windows. Could you break down some of the fundamental differences in how malware behaves on Linux versus Windows, and why that matters for defenders in the cloud?
- 'Living off the land' isn't a new concept, but on Linux, it feels like attackers have a particularly rich set of native tools at their disposal. What are some of the more subtly abused but legitimate Linux utilities you're seeing weaponized in cloud attacks, and how does that complicate detection?
- When you weigh agent-based versus agentless monitoring in cloud and containerized Linux environments, what are the operational and outcome trade-offs security teams really need to consider?
- SSH keys are the de facto keys to the kingdom in many Linux environments. Beyond just 'use strong passphrases', what are the critical, often overlooked risks associated with SSH key management, credential theft, and subsequent lateral movement that you see plaguing organizations, especially at scale in the cloud? (See the sketch after this entry.)
- What are the biggest operational hurdles teams face when trying to conduct incident response effectively and rapidly across such a distributed Linux environment, and what's key to overcoming them?
Resources:
- EP194 Deep Dive into ADR - Application Detection and Response
- EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
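The SSH key risks mentioned above are easy to demonstrate: much of the overlooked exposure sits in authorized_keys entries that carry no restrictions, so a stolen key works from anywhere and can run anything. A minimal sketch (the heuristic and the sample file content are ours, not from the episode):

```python
# Key types whose entries carry no leading options (heuristic; options like
# from="..." or command="..." would appear before the key type).
KEY_TYPE_PREFIXES = ("ssh-", "ecdsa-", "sk-")


def audit_authorized_keys(lines: list[str]) -> list[str]:
    """Flag authorized_keys entries that have no restriction options."""
    findings = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if line.startswith(KEY_TYPE_PREFIXES):
            findings.append(f"unrestricted key: {line[:40]}...")
    return findings


# Illustrative file content; these are not real keys.
sample = [
    'from="10.0.0.0/8",command="/usr/bin/rrsync /backup" ssh-ed25519 AAAAC3Nz... backup@host',
    "ssh-rsa AAAAB3Nz... legacy-user",
]
for finding in audit_authorized_keys(sample):
    print(finding)
```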

EP238 Google Lessons for Using AI Agents for Securing Our Enterprise (31:40)
Guest:
- Dominik Swierad, Senior PM, D&R AI and Sec-Gemini
Topics:
- When introducing AI agents to security teams at Google, what was your initial strategy to build trust and overcome the natural skepticism? Can you walk us through the very first conversations and the key concerns that were raised?
- With a vast array of applications, how did you identify and prioritize the initial use cases for AI agents within Google's enterprise security? What specific criteria made a use case a good candidate for early evaluation? Were there any surprising 'no-go' areas you discovered?
- Beyond simple efficiency gains, what were the key metrics and qualitative feedback mechanisms you used to evaluate the success of the initial AI agent deployments?
- What were the most significant hurdles you faced in transitioning from successful pilots to broader adoption of AI agents?
- How do you manage the inherent risks of autonomous agents, such as the potential for errors or adversarial manipulation, within a live and critical environment like Google's?
- How has the introduction of AI agents changed the day-to-day responsibilities and skill requirements for Google's security engineers?
- From your unique vantage point of deploying defensive AI agents, what are your biggest concerns about how threat actors will inevitably leverage similar technologies?
Resources:
- EP235 The Autonomous Frontier: Governing AI Agents from Code to Courtroom
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps
- EP227 AI-Native MDR: Betting on the Future of Security Operations?
- EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil

EP237 Making Security Personal at the Speed and Scale of TikTok (28:40)
Guest:
- Kim Albarella, Global Head of Security, TikTok
Questions:
- Security is part of your DNA. In your day to day at TikTok, what are some tips you'd share with users about staying safe online?
- Many regulations were written with older technologies in mind. How do you bridge the gap between these legacy requirements and the realities of a modern, microservices-based tech stack like TikTok's, ensuring both compliance and agility?
- You have a background in compliance and risk management. How do you approach demonstrating the effectiveness of security controls, not just their existence, especially given the rapid pace of change in both technology and regulations?
- TikTok operates on a global scale, facing a complex web of varying regulations and user expectations. How do you balance the need for localized compliance with the desire for a consistent global security posture? How do you avoid creating a fragmented and overly complex system, and what role does automation play in this balancing act?
- What strategies and metrics do you use to ensure auditability and provide confidence to stakeholders?
- We understand you've used TikTok videos for security training. Can you elaborate on how you've fostered a strong security culture internally, especially in such a dynamic environment?
- What is in your TikTok feed?
Resources:
- Kim on TikTok: @securishe and TikTopTips
- EP214 Reconciling the Impossible: Engineering Cloud Systems for Diverging Regulations
- EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud
- EP14 Making Compliance Cloud-native

EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI (27:15)
Guest:
- Manija Poulatova, Director of Security Engineering and Operations at Lloyd's Banking Group
Topics:
- SIEM migration is hard, and it can take ages. Yours was, given the scale and the industry, on the relatively short side at 9 months. What's been your experience so far, and what could have gone faster?
- Anton might be a “reformed” analyst, but I can't resist asking a three-legged-stool question: of the people/process/technology aspects, which are the hardest for this transformation?
- What helped the most in solving your big challenges? Was there a process that people wanted to keep but that needed to go for the new tool?
- One thing we talked about was the plan to adopt composite alerting techniques and what we've been calling the “funnel model” for detection in Google SecOps. Could you share what that means and how your team is adopting it?
- There are a lot of moving parts in a D&R journey from a process and tooling perspective. How did you structure your plan, and why?
- It wouldn't be our show in 2025 if I didn't ask at least one AI question! What lessons do you have for other security leaders preparing their teams for the AI-in-SOC transition?
Resources:
- EP234 The SIEM Paradox: Logs, Lies, and Failing to Detect
- EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
- EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise
- EP184 One Week SIEM Migration: Fact or Fiction?
- EP125 Will SIEM Ever Die: SIEM Lessons from the Past for the Future
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- “Maverick” — Scorched Earth SIEM Migration FTW! blog
- “Hack the box” site

EP235 The Autonomous Frontier: Governing AI Agents from Code to Courtroom (34:06)
Guest:
- Anna Gressel, Partner at Paul, Weiss, one of the AI practice leads
Episode co-host:
- Marina Kaganovich, Office of the CISO, Google Cloud
Questions:
- Agentic AI and AI agents, with their promise of autonomous decision-making and learning capabilities, present a unique set of risks across various domains. What are some of the key areas of concern for you?
- What frameworks are most relevant to the deployment of agentic AI, and where are the potential gaps?
- What are you seeing in terms of how regulatory frameworks may need to be adapted to address the unique challenges posed by agentic AI?
- How about the legal aspects: does traditional tort law or product liability apply? How does the autonomous nature of agentic AI challenge established legal concepts of liability and responsibility?
- The other related topic is knowing what agents “think” on the inside. What are the key legal considerations for managing transparency and explainability in agentic AI decision-making?
Resources:
- Paul, Weiss Waking Up With AI (Apple, Spotify)
- Cloud CISO Perspectives: How Google secures AI Agents
- Securing the Future of Agentic AI: Governance, Cybersecurity, and Privacy Considerations

EP234 The SIEM Paradox: Logs, Lies, and Failing to Detect (37:59)
Guest:
- Svetla Yankova, Founder and CEO, Citreno
Topics:
- Why do so many organizations still collect logs yet don't detect threats? In other words, why is our industry spending more money than ever on SIEM tooling and still not “winning” against Tier 1 ... or even Tier 5 adversaries?
- What are the hardest parts about getting the right context into a SOC analyst's face when they're triaging and investigating an alert? Is it integration? SOAR playbook development? Data enrichment? All of the above?
- What are the organizational problems that keep organizations from getting the full benefit of the security operations tools they're buying?
- Top SIEM mistakes? Is it trying to migrate too fast? Is it accepting a too-slow migration? In other words, where are expectations tyrannical for customers? Have they changed much since 2015?
- Do you expect people to write their own detections? Detection engineering seems popular with elite clients and nobody else; what can we do?
- Do you think AI will change how we SOC (Tim: “SOC” is not a verb?) in the next 1-3-5 years?
- Do you think AI SOC tech is repeating the mistakes SOAR vendors made 10 years ago? Are we making the same mistakes all over again? Are we making new mistakes?
Resources:
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise
- EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
- EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering
- “RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog
- Citreno, The Backstory
- “Parenting Teens With Love And Logic” book (as a management book)
- “Security Correlation Then and Now: A Sad Truth About SIEM” blog (the classic from 2019)

EP233 Product Security Engineering at Google: Resilience and Security (25:44)
Guest:
- Cristina Vintila, Product Security Engineering Manager, Google Cloud
Topics:
- Could you share insights into how Product Security Engineering approaches at Google have evolved, particularly in response to emerging threats (like Log4j in 2021)?
- You mentioned applying SRE best practices in detection and response, and overall in securing the Google Cloud products. How does Google balance high reliability and operational excellence with the needs of detection and response (D&R)?
- How does Google decide which data sources and tools are most critical for effective D&R? How do we deal with high volumes of data?
Resources:
- EP215 Threat Modeling at Google: From Basics to AI-powered Magic
- EP117 Can a Small Team Adopt an Engineering-Centric Approach to Cybersecurity?
- Podcast episodes on how Google does security
- EP17 Modern Threat Detection at Google
- EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
- Google SRE book
- Google SRS book

EP232 The Human Element of Privacy: Protecting High-Risk Targets and Designing Systems (31:37)
Guest:
- Sarah Aoun, Privacy Engineer, Google
Topics:
- You have had a fascinating career since we [Tim] graduated from college together. You mentioned before we met that you've consulted with a literal world leader on his personal digital security footprint. Maybe tell us how you got into this field of helping organizations treat sensitive information securely, and how that led to helping keep targeted individuals secure?
- You also work as a privacy engineer on Fuchsia, Google's new operating system kernel. How did you go from human rights and privacy to that?
- What are the key privacy considerations when designing an operating system for “ambient computing”? How do you design privacy into something like that? More importantly, not only “how do you do it”, but how do you convince people that you did do it?
- When we talk about "higher risk" individuals, the definition can be broad. How can an average person, or someone working in a seemingly less sensitive role, better assess whether they might be a higher-risk target? What are the subtle indicators?
- Thinking about the advice you give for personal security beyond passwords and multi-factor auth, how much of effective personal digital hygiene comes down to behavioral changes versus purely technical solutions?
- Given your deep understanding of both individual security needs and large-scale OS design, what's one thing you wish developers building cloud services or applications would fundamentally prioritize about user privacy?
Resources:
- Google privacy controls
- Advanced Protection Program

EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise (30:40)
Guest:
- David French, Staff Adoption Engineer, Google Cloud
Topics:
- Detection as code is one of those meme phrases I hear a lot, but I'm not sure everyone means the same thing when they say it. Could you tell us what you mean by it, and what upside it has for organizations in your model of it?
- What gets better for security teams and security outcomes when you start managing in a DAC world?
- What is primary: actual code, or using a SWE-style process for detection work?
- Not every SIEM has a good set of APIs for this, right? What's a team to do in a world of no or low API support for this model?
- If we're talking about as-code models, one of the important parts of regular software development is testing. How should teams think about testing their detection corpus? Where do we even start? Smoke tests? Unit tests? (See the sketch after this entry.)
- You talk about a rule schema; you might also think of it in code terms as a standard interface on the detection objects. How should organizations think about standardizing this, and why should they?
- If we're into a world of detection rules as code and detections as code, can we also think about alert handling via code? This is like SOAR but with more of a software engineering approach, right?
- One more thing that stood out to me in your presentation was the call for sharing detection content. Is this between vendors, or between vendors and end users?
Resources:
- Can We Have “Detection as Code”?
- Testing in Detection Engineering (Part 8)
- “So Good They Can't Ignore You: Why Skills Trump Passion in the Quest for Work You Love” book
- EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering
- EP181 Detection Engineering Deep Dive: From Career Paths to Scaling SOC Teams
- EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther
- Getting Started with Detection-as-Code and Google SecOps
- Detection Engineering Demystified: Building Custom Detections for GitHub Enterprise
- From soup to nuts: Building a Detection-as-Code pipeline
- David French - Medium Blog
- Detection Engineering Maturity Matrix
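As a toy illustration of the rule-schema-plus-testing idea discussed above (the schema and names are invented for this sketch, not David's actual model): treat each detection as a small typed object with attached test cases, so CI can fail the build when a rule regresses. The audit-log method name in the example is, to the best of our knowledge, the real Google Cloud one.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Detection:
    """A rule plus the metadata and test cases CI needs (hypothetical schema)."""
    name: str
    severity: str
    logic: Callable[[dict], bool]
    true_positives: list = field(default_factory=list)   # events that must match
    true_negatives: list = field(default_factory=list)   # events that must not

    def run_tests(self) -> bool:
        hits = all(self.logic(e) for e in self.true_positives)
        misses = not any(self.logic(e) for e in self.true_negatives)
        return hits and misses


rule = Detection(
    name="service_account_key_created",
    severity="high",
    logic=lambda e: e.get("method") == "google.iam.admin.v1.CreateServiceAccountKey",
    true_positives=[{"method": "google.iam.admin.v1.CreateServiceAccountKey"}],
    true_negatives=[{"method": "google.iam.admin.v1.ListServiceAccounts"}],
)
print("PASS" if rule.run_tests() else "FAIL", rule.name)
```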

EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google (26:11)
Guest:
- Daniel Fabian, Principal Digital Arsonist, Google
Topics:
- Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process?
- What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems?
- Can you provide an example of a specific TTP that has proven effective against AI systems, and discuss the implications for security teams looking to detect it?
- What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle?
- What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field?
Resources:
- Video (LinkedIn, YouTube)
- Google's AI Red Team: the ethical hackers making AI safer
- EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?
- EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
- EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons
- Lessons from AI Red Teaming – And How to Apply Them Proactively [RSA 2025]

EP229 Beyond the Hype: Debunking Cloud Breach Myths (and What DBIR Says Now) (35:05)
Guest:
- Alex Pinto, Associate Director of Threat Intelligence, Verizon Business; lead of the Verizon Data Breach Investigations Report (DBIR)
Topics:
- How would you define “a cloud breach”? Is that a real (and different) thing?
- Are cloud breaches just a result of leaked keys and creds?
- If customers are responsible for 99% of cloud security problems, is a cloud breach really about a customer being breached?
- Are misconfigurations really responsible for so many cloud security breaches? How are we still failing at configuration?
- What parts of the DBIR are not total “groundhog day”?
- Something about vuln exploitation vs credential abuse in today's breaches: what's driving the shifts we're seeing?
- Are we at peak ransomware? Will ransomware be here in 20 years? Will we be here in 20 years talking about it?
- How is AI changing the breach report, other than putting in hilarious footnotes about how the report is for humans to read and is written by actual humans?
Resources:
- Video (LinkedIn, YouTube)
- Verizon DBIR 2025
- EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends
- EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality
- EP112 Threat Horizons - How Google Does Threat Intelligence
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025

EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines (27:09)
Guest:
- Alan Braithwaite, Co-founder and CTO @ RunReveal
Topics:
- SIEM is hard, and many vendors have discovered this over the years. You need to get storage, security, and integration complexity just right. You also need to be better than the incumbents. How would you approach this now?
- Decoupled SIEM vs SIEM/EDR/XDR combo: these point in opposite directions. Which side do you think will win?
- In a world where data volumes are exploding, especially in cloud environments, you're building a SIEM with ClickHouse as its backend, focusing on both parsed and raw logs. What's the core advantage of this approach, and how does it address the limitations of traditional SIEMs in handling scale?
- Cribl, Bindplane, and “security pipeline vendors” are all the rage. Wouldn't it be logical to just include this in a modern SIEM?
- You're envisioning a 'Pipeline QL' that compiles to SQL, enabling 'detection in SQL.' This sounds like a significant shift, and perhaps not for the better? (Anton is horrified, for once.) How does this approach affect detection engineering? (See the toy compiler sketch after this entry.)
- With Sigma HQ support out of the box, and the ability to convert SPL to Sigma, you're clearly aiming for interoperability. How crucial is this approach in your vision, and how do you see it benefiting the security community?
- What is SIEM in 2025 and beyond? What's the endgame for security telemetry data? Is this truly SIEM 3.0, 4.0, or whatever-oh?
Resources:
- EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
- EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther
- EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures
- “20 Years of SIEM: Celebrating My Dubious Anniversary” blog
- “RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog
- tl;dr security newsletter
- Introducing a RunReveal Model Context Protocol Server!
- MCP: Building Your SecOps AI Ecosystem
- AI Runbooks for Google SecOps: Security Operations with Model Context Protocol
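The "pipeline language that compiles to SQL" idea can be shown in miniature. The toy compiler below is our illustration, not RunReveal's actual Pipeline QL: each pipeline stage maps onto a SQL clause, which is the essence of "detection in SQL".

```python
def compile_pipeline(table: str, stages: list[tuple[str, str]]) -> str:
    """Compile a tiny filter/select pipeline into a single SQL query.

    Toy grammar: ("where", <condition>) and ("select", <columns>).
    A real pipeline language covers parsing, joins, aggregation, etc.
    """
    columns = "*"
    conditions = []
    for op, arg in stages:
        if op == "select":
            columns = arg
        elif op == "where":
            conditions.append(f"({arg})")
        else:
            raise ValueError(f"unknown stage: {op}")
    sql = f"SELECT {columns} FROM {table}"
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql


# A simple "detection in SQL": failed console logins, projected for triage.
print(compile_pipeline(
    "auth_logs",
    [("where", "outcome = 'FAILURE'"),
     ("where", "app = 'console'"),
     ("select", "ts, user, src_ip")],
))
```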

EP227 AI-Native MDR: Betting on the Future of Security Operations? (23:58)
Guests:
- Eric Foster, CEO of Tenex.AI
- Venkata Koppaka, CTO of Tenex.AI
Topics:
- Why is your AI-powered MDR special? Why start an MDR from scratch using AI?
- Why should users bet on an “AI-native” MDR instead of an MDR that has already got its act together and is now applying AI to an existing set of practices?
- What's the current breakdown in labor between your human SOC analysts and your AI SOC agents? How do you expect this to evolve, and how will that change your unit economics?
- What tasks are humans uniquely good at in today's SOC? How do you expect that to change in the next 5 years?
- We hear concerns about SOC AI missing things, but we know humans miss things all the time too. So how do you manage buyer concerns about the AI agents missing things?
- Let's talk about how you're helping customers measure your efficacy overall. What metrics should organizations prioritize when evaluating MDR?
Resources:
- Video
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 (quote from Eric in the title!)
- EP10 SIEM Modernization? Is That a Thing?
- Tenex.AI blog
- “RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog
- The original ASO 10X SOC paper that started it all (2021)
- “Baby ASO: A Minimal Viable Transformation for Your SOC” blog
- “The Return of the Baby ASO: Why SOCs Still Suck?” blog
- "Learn Modern SOC and D&R Practices Using Autonomic Security Operations (ASO) Principles" blog

EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams (24:39)
Guest:
- Christine Sizemore, Cloud Security Architect, Google Cloud
Topics:
- Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?
- I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?
- We like to say that history might not repeat itself but it does rhyme. What are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
- We've talked a lot about technology and process. What are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?
- We are all hearing about agentic security, so can we just ask the AI to secure itself?
- Top 3 things to do to secure the AI software supply chain for a typical org?
Resources:
- Video
- “Securing AI Supply Chain: Like Software, Only Not” blog (and paper)
- “Securing the AI software supply chain” webcast
- EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments
- Protect AI issue database
- “Staying on top of AI Developments”
- “Office of the CISO 2024 Year in Review: AI Trust and Security”
- “Your Roadmap to Secure AI: A Recap” (2024)
- "RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check" (references our "data as code" presentation)

EP225 Cross-promotion: The Cyber-Savvy Boardroom Podcast: EP2 Christian Karam on the Use of AI (24:46)
Hosts:
- David Homovich, Customer Advocacy Lead, Office of the CISO, Google Cloud
- Alicja Cade, Director, Office of the CISO, Google Cloud
Guest:
- Christian Karam, Strategic Advisor and Investor
Resources:
- EP2 Christian Karam on the Use of AI (as aired originally)
- The Cyber-Savvy Boardroom podcast site
- The Cyber-Savvy Boardroom podcast on Spotify
- The Cyber-Savvy Boardroom podcast on Apple Podcasts
- The Cyber-Savvy Boardroom podcast on YouTube
- Now hear this: A new podcast to help boards get cyber savvy (without the jargon)
- Board of Directors Insights Hub
- Guidance for Boards of Directors on How to Address AI Risk

EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps (30:40)
Guest:
- Diana Kelley, CSO at Protect AI
Topics:
- Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
- What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
- How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
- In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
- How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
- What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
- Top differences between LLM/chatbot AI security vs AI agent security?
Resources:
- “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers”
- “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever”
- Secure by Design for AI by Protect AI
- “Securing AI Supply Chain: Like Software, Only Not”
- OWASP Top 10 for Large Language Model Applications
- OWASP Top 10 for AI Agents (draft)
- MITRE ATLAS
- “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper)
- LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes

EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 (31:37)
Guests:
- No guests, just us in the studio
Topics:
- At RSA 2025, did we see solid, measurably better outcomes from AI use in security, or mostly just "sizzle" and good ideas with potential?
- Are the promises of an "AI SOC" repeating the mistakes seen with SOAR in previous years regarding fully automated security operations? Does "AI SOC" work, according to the RSA floor?
- How realistic is the vision expressed by some [yes, really!] that AI progress could lead to technical teams, including IT and security, shrinking dramatically or even to zero in a few years?
- Why do companies continue to rely on decades-old or “non-leading” security technologies, and what role does the concept of an "organizational change budget" play in this inertia?
- Is being "AI native" fundamentally better for security technologies compared to adding AI capabilities to existing platforms, or is the jury still out? Got "an AI-native SIEM"? Be ready to explain how yours is better!
Resources:
- EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines?
- EP119 RSA 2023 - What We Saw, What We Learned, and What We're Excited About
- EP70 Special - RSA 2022 Reflections - Securing the Past vs Securing the Future
- RSA (“RSAI”) Conference 2024 Powered by AI with AI on Top — AI Edition (Hey AI, Is This Enough AI?) [Anton’s RSA 2024 recap blog]
- New Paper: “Future of the SOC: Evolution or Optimization — Choose Your Path” (Paper 4 of 4.5) [talks about the change budget discussed]

EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends (35:19)
Guests:
- Kirstie Failey @ Google Threat Intelligence Group
- Scott Runnels @ Mandiant Incident Response
Topics:
- What is the hardest thing about turning distinct incident reports into a fun-to-read and useful report like M-Trends?
- How much are the lessons and recommendations skewed by the fact that they are all “post-IR” stories?
- Are “IR-derived” security lessons the best way to improve security? Isn't this a bit like learning how to build safely from fires vs learning safety engineering?
- The report implies that F500 companies suffer from certain security issues despite their resources. Does this automatically mean that smaller companies suffer from the same, but more?
- "Dwell time" metrics sound obvious, but is there magic behind how this is done? Sometimes “dwell time going down” is not automatically the defender's win, right? What is the expected minimum dwell time? If “it depends”, then what does it depend on?
- Impactful outliers vs general trends (“by the numbers”): what teaches us more about security?
- Why do we seem to repeat the mistakes so much in security? Do we think it is useful to give the same advice repeatedly if the data implies that it is correct advice but people clearly do not follow it?
Resources:
- M-Trends 2025 report
- Mandiant Attack Lifecycle
- EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality
- EP147 Special: 2024 Security Forecast Report

EP221 Special - Semi-Live from Google Cloud Next 2025: AI, Agents, Security ... Cloud? (30:26)
Guests:
- No guests [Tim in Vegas and Anton remote]
Topics:
- So, another Next is done. Beyond the usual Vegas chaos, what was the overarching security theme or vibe you [Tim] felt dominated the conference this year?
- Thinking back to Next '24, what felt genuinely different this year versus just the next iteration of last year's trends?
- Last year, we pondered the 'Cloud Island' vs. 'Cloud Peninsula' question. Based on Next 2025, is cloud security becoming more integrated with general cybersecurity, or is it still its own distinct domain?
- What wider trends did you observe, perhaps from the expo floor buzz or partner announcements, that security folks should be aware of?
- What was the biggest surprise for you at Next 2025? Something you absolutely didn't see coming?
- Putting on your prediction hats (however reluctantly): based on Next 2025, what do you foresee as the major cloud security focus or challenge for the industry in the next 12 months?
- If a busy podcast listener could only take one key message or action item away from everything announced and discussed at Next 2025, what should it be?
Resources:
- EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps

EP220 Big Rewards for Cloud Security: Exploring the Google VRP (29:13)
Guests:
- Michael Cote, Cloud VRP Lead, Google Cloud
- Aadarsh Karumathil, Security Engineer, Google Cloud
Topics:
- Vulnerability response at cloud scale sounds very hard! How do you triage vulnerability reports and make sure we're addressing the right ones in the underlying cloud infrastructure?
- How do you determine how much to pay for each vulnerability? What is the largest reward we paid? What was it for?
- What products get the most submissions? Is this driven by the actual product security or by trends and fashions like AI?
- What are the most likely rejection reasons?
- What makes for a very good, and an exceptional, vulnerability report? We hear we pay more for “exceptional” reports; what does that mean?
- In college, Tim had a roommate who would take us out drinking on his Google web app vulnerability rewards. Do we have something similar for people reporting vulnerabilities in our cloud infrastructure? Are people making real money off this?
- How do we actually uniquely identify vulnerabilities in the cloud? CVE does not work well, right?
- What are the expected risk-reduction benefits from Cloud VRP?
Resources:
- Cloud VRP site
- Cloud VRP launch blog
- CVR: The Mines of Kakadûm

EP219 Beyond the Buzzwords: Decoding Cyber Risk and Threat Actors in Asia Pacific (31:46)
Guest:
- Steve Ledzian, APAC CTO, Mandiant at Google Cloud
Topics:
- We've seen a shift in how boards engage with cybersecurity. From your perspective, what's the most significant misconception boards still hold about cyber risk, particularly in the Asia Pacific region, and how has that impacted their decision-making?
- Cybersecurity is rife with jargon. If you could eliminate or redefine one overused term, which would it be and why? How does this overloaded language specifically hinder effective communication and action in the region?
- The Mandiant Attack Lifecycle is a well-known model. How has your experience in the East Asia region challenged or refined this model? Are there unique attack patterns or actor behaviors that necessitate adjustments?
- Two years post-acquisition, what's been the most surprising or unexpected benefit of the Google-Mandiant combination?
- M-Trends data provides valuable insights, particularly regarding dwell time. Considering the Asia Pacific region, what are the most significant factors reducing dwell time, and how do these trends differ from global averages?
- Given your expertise in Asia Pacific, can you share an observation about a threat actor's behavior that is often overlooked in broader cybersecurity discussions?
- Looking ahead, what's the single biggest cybersecurity challenge you foresee for organizations in the Asia Pacific region over the next five years, and what proactive steps should they be taking now to prepare?
Resources:
- EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant
- EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive
- EP191 Why Aren't More Defenders Winning? Defender’s Advantage and How to Gain it!

EP218 IAM in the Cloud & AI Era: Navigating Evolution, Challenges, and the Rise of ITDR/ISPM (30:10)
Guest:
- Henrique Teixeira, Senior VP of Strategy, Saviynt, ex-Gartner analyst
Topics:
- How have you seen IAM evolve over the years, especially with the shift to the cloud, and now AI? What are some of the biggest challenges and opportunities these two shifts present?
- ITDR (Identity Threat Detection and Response) and ISPM (Identity Security Posture Management) are emerging areas in IAM. How do you see these fitting into the overall IAM landscape? Are they truly distinct categories or just extensions of existing IAM practices?
- Shouldn't ITDR just be part of your cloud DR, or maybe even your SecOps tool of choice? It seems goofy to try to stand ITDR on its own when the impact of an identity compromise is entirely a function of what that identity can access or do, no?
- Regarding workload vs. human identity, could you elaborate on the unique security considerations for each? How does the rise of machine identities and APIs impact IAM approaches?
- We had a whole episode around machine identity that involved turtles. What have you seen in the machine identity space, and how have you seen users mess it up?
- The cybersecurity world is full of acronyms. Any tips on how to create a memorable and impactful acronym?
Resources:
- EP166 Workload Identity, Zero Trust and SPIFFE (Also Turtles!)
- EP182 ITDR: The Missing Piece in Your Security Puzzle or Yet Another Tool to Buy?
- EP127 Is IAM Really Fun and How to Stay Ahead of the Curve in Cloud IAM?
- EP94 Meet Cloud Security Acronyms with Anna Belak
- EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler
- EP199 Your Cloud IAM Top Pet Peeves (and How to Fix Them)
- EP188 Beyond the Buzzwords: Identity's True Role in Cloud and SaaS Security
- “Playing to Win: How Strategy Really Works” book
- “Open” book

EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? (23:11)
Guest:
- Alex Polyakov, CEO at Adversa AI
Topics:
- Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
- Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now?
- What trips up most clients: classic security mistakes in AI systems, or AI-specific mistakes? Are there truly new mistakes in AI systems, or are they old mistakes in new clothing?
- I know it is not your job to fix it, but much of this is unfixable, right?
- Is it a good idea to use AI to secure AI?
Resources:
- EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far
- AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi
- Adversa AI blog
- Oops! 5 serious gen AI security mistakes to avoid
- Generative AI Fast Followership: Avoid These First Adopter Security Missteps

EP216 Ephemeral Clouds, Lasting Security: CIRA, CDR, and the Future of Cloud Investigations (31:43)
Guests:
- James Campbell, CEO, Cado Security
- Chris Doman, CTO, Cado Security
Topics:
- Cloud Detection and Response (CDR) vs Cloud Investigation and Response Automation (CIRA): what's the story here?
- There is an “R” in CDR, right? Can't my (modern) SIEM/SOAR do that? What about this becoming a part of modern SIEM/SOAR in the future?
- What gets better when you deploy a CIRA (a) and your CIRA in particular (b)?
- Ephemerality and security: what are the fun overlaps? Does “E” help “S” or hurt it?
- What about compliance? Ephemeral compliance sounds iffy...
- Cloud investigations: what is special about them?
- How does CSPM intersect with this? Is CIRA part of CNAPP?
- A secret question; you'll need to listen for it!
Resources:
- EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud
- EP67 Cyber Defense Matrix and Does Cloud Security Have to DIE to Win?
- EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics
- Cloud security incidents (Rami McCarthy)
- Cado resources

EP215 Threat Modeling at Google: From Basics to AI-powered Magic (26:03)
Guest:
- Meador Inge, Security Engineer, Google Cloud
Topics:
- Can you walk us through Google's typical threat modeling process? What are the key steps involved?
- Threat modeling can be applied to various areas. Where does Google utilize it the most? How do we apply this to huge and complex systems?
- How does Google keep its threat models updated? What triggers a reassessment?
- How does Google operationalize threat modeling information to prioritize security work and resource allocation? How does it influence your security posture?
- What are the biggest challenges Google faces in scaling and improving its threat modeling practices? Any stories where we got this wrong?
- How can LLMs like Gemini improve Google's threat modeling activities? Can you share examples of basic and more sophisticated techniques?
- What advice would you give to organizations just starting with threat modeling?
Resources:
- EP12 Threat Models and Cloud Security
- EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
- EP200 Zero Touch Prod, Security Rings, and Foundational Services: How Google Does Workload Security
- EP140 System Hardening at Google Scale: New Challenges, New Solutions
- Threat Modeling Manifesto
- EP176 Google on Google Cloud: How Google Secures Its Own Cloud Use
- Awesome Threat Modeling
- Adam Shostack “Threat Modeling: Designing for Security” book
- Ross Anderson “Security Engineering” book
- “How to Solve It” book