Navigating Digital Entropy: Insights from IAPP’s Organizational Digital Governance Report

29:05
 
Content provided by Jodi and Justin Daniels. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Jodi and Justin Daniels or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ppacc.player.fm/legal.

Joe Jones is the Director of Research and Insights at the IAPP. Previously, he served as the UK Government’s Deputy Head of Digital Trade, where he was responsible for digital policy. Earlier in his career, Joe was a private practice lawyer working on international data issues.

In this episode…

Companies are grappling with the challenge of managing privacy, security, AI, and data governance in an increasingly complex regulatory environment. The IAPP’s Organizational Digital Governance Report attributes much of this difficulty to “digital entropy”: the disorder created by overlapping laws, rapid technological shifts, and cultural and socio-technical differences. The report emphasizes that organizations need to align their governance structures in response. How can companies navigate these complexities while maintaining compliance and operational efficiency?

The IAPP’s digital governance report provides insights into how companies can adapt their structures and processes to meet the growing demands of digital governance. It outlines three varying approaches companies are using to navigate digital entropy: the analog model, where companies use their current structures while adding more tasks to existing teams; the augmented model, where companies create new committees or cross-functional teams to define overarching terms for digital governance and policy; and the aligned model, where companies have dedicated roles for digital governance. The report underscores the importance of moving toward a more aligned model, where privacy, security, and AI governance are streamlined under cohesive leadership. This involves empowering privacy teams, implementing regular audits, fostering collaboration across departments, and avoiding reliance on ad hoc committees to align with evolving privacy regulations.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Joe Jones, Director of Research and Insights at IAPP, about how companies can leverage insights from the IAPP Organizational Digital Governance Report to improve their digital governance frameworks. Joe explains how companies can stay ahead of regulatory changes by embracing more structured governance models. He also emphasizes the need for privacy professionals to act as enablers within organizations, offering guidance on leveraging data responsibly while navigating the growing complexity of privacy regulations.


223 episodes


All episodes

Brett Ewing is the Founder and CEO of AXE.AI, a cutting-edge cybersecurity SaaS start-up, and the Chief Information Security Officer at 3DCloud. He has built a career in offensive cybersecurity, focusing on driving exponential improvement. Brett progressed from a Junior Penetration Tester to Chief Operating Officer at Strong Crypto, a provider of cybersecurity solutions. He brings over 15 years of experience in information technology, with the past six years focused on penetration testing, incident response, advanced persistent threat simulation, and business development. He holds degrees in secure systems administration and cybersecurity, and is currently completing a master's degree in cybersecurity with a focus on AI/ML security at the SANS Technology Institute. Brett also holds more than a dozen certifications in IT, coding, and security from the SANS Institute, CompTIA, AWS, and other industry vendors.

In this episode…

Penetration testing plays a vital role in cybersecurity, but the traditional manual process is often slow and resource-heavy. Testing cycles can take weeks, creating gaps that leave organizations vulnerable to fast-moving threats. With growing interest in more efficient approaches, organizations are exploring new AI tools to automate tasks like tool configuration, project management, and data analysis. How can cybersecurity teams use AI to test environments faster without increasing risk?

AXE.AI offers an AI-powered platform that supports ethical hackers and red teamers by automating key components of the penetration testing process. The platform reduces overhead by configuring tools, analyzing output, and building task lists during live engagements. This allows teams to complete high-quality tests in days instead of weeks. AXE.AI’s approach supports complex environments, improves data visibility for testers, and scales efficiently across enterprise networks. The company emphasizes a human-centered approach and advocates for workforce education and training as a foundation for secure AI adoption.

In today’s episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Brett Ewing, Founder and CEO of AXE.AI, about leveraging AI for offensive cybersecurity. Brett explains how AXE.AI’s platform enhances penetration testing and improves speed and coverage for large-scale networks. He also shares how AI is changing both attack and defense strategies, highlighting the risks posed by large language models (LLMs) and deepfakes, and explains why investing in continuous workforce training remains the most important cyber defense for companies today.
 
James Patto is a Partner at Helios Salinger. He is a leading voice in Australia’s tech law landscape, trusted by business and government on privacy, cybersecurity, and AI issues. With over a decade of experience as a digital lawyer, he helps organizations turn regulation into opportunity, bridging law, innovation, and strategy to build trust and thrive in a digital world.

In this episode…

Australian privacy law stands at a critical juncture as organizations potentially face the country's most significant regulatory transformation yet. While the current principles-based Australian Privacy Act has been the foundation for over a decade, it contains notable gaps, like limited individual rights and broad exemptions for small businesses, employee data, and political parties. 84% of Australians want more control over how their personal information is collected and used, and with recent enforcement changes introducing civil penalties and on-the-spot fines, regulators now have stronger tools to hold organizations accountable. As lawmakers consider the next phase of reforms, how can businesses prepare for new compliance requirements while navigating an uncertain implementation timeline?

Businesses can adapt to evolving privacy regulations and position themselves for success by strengthening their current privacy practices, including focusing on privacy notice quality, direct marketing opt-out procedures, and data breach response notice accuracy. Conducting a privacy maturity assessment and implementing streamlined, risk-based privacy impact assessments can help identify gaps and prepare for new compliance obligations. It’s also critical for organizations to build a comprehensive data inventory so they understand what data they collect, where it resides, and how it’s used, shared, or sold.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with James Patto, Partner at Helios Salinger, about the current state and future of Australia’s privacy law. James discusses the major shifts in Australia’s privacy landscape and the broader implications for businesses. He shares how Australia’s strong small business sector influences privacy policymaking and how the Privacy Review Report's 89 proposals might reshape Australia's regulatory framework. James also explores the differences between Australia’s privacy law and the GDPR, the timeline for proposed reforms, and what companies should do now to prepare.
 
Anita Gorney is the Head of Privacy and AI Legal at Harvey. Harvey is an AI tool for legal professionals and professional service providers. Before Harvey, she was Privacy Counsel at Stripe. Anita studied law in Sydney and began her career there before moving to London and then New York.

In this episode…

Legal professionals often spend time on manual tasks that are repetitive and difficult to scale. Emerging AI platforms, like Harvey AI, are addressing this challenge by offering tools that help lawyers handle tasks such as legal research and contract review more efficiently. As legal professionals adopt AI to streamline their work, they are placing greater focus on data confidentiality and the secure handling of client information. Harvey AI addresses these concerns through its strict privacy and security controls, customer-controlled retention and deletion settings, and a commitment to not train on customer data.

Harvey AI provides a purpose-built platform tailored for legal professionals. The company’s suite of tools — Assistant, Vault, and Workflow — automates repetitive legal work like summarizing documents, performing contract reviews, and managing due diligence processes. Harvey AI emphasizes privacy and security through features like zero data retention, encrypted processing, and workspace isolation, ensuring customer data remains confidential and is never used for model training. With a transparent, customer-first approach, Harvey AI empowers legal teams to work more efficiently without compromising trust or user data.

In this episode of the She Said Privacy/He Said Security Podcast, Jodi and Justin Daniels speak with Anita Gorney, Head of Privacy and AI Legal at Harvey AI, about how legal professionals use specialized artificial intelligence to streamline their work. Anita explains how Harvey AI's platform helps with tasks like contract analysis and due diligence, while addressing privacy and security concerns through measures like customizable data retention periods and workspace isolation. She also discusses the importance of privacy by design in AI tools, conducting privacy impact assessments, and implementing user-controlled privacy settings.
 
Ben Halpert is a cybersecurity leader, educator, and advocate dedicated to empowering digital citizens. As a Fractional CISO, author, and the founder of Savvy Cyber Kids, he advances cyber safety and ethics. A sought-after speaker, Ben shares insights globally, shaping secure digital futures at work, school, and home.

In this episode…

Many parents mistakenly believe that technology companies have built-in safety controls that keep children safe online. In reality, these protections are often inadequate and misleading. From AI chatbots posing as friends to online predators targeting children through gaming platforms and social media, young users, whose brains are still developing, struggle to distinguish between real human interactions and programmed responses. How can parents and caregivers proactively safeguard their children’s digital experiences while fostering healthy tech habits?

Addressing these risks starts with parental oversight and consistent, age-appropriate education and guidance. Devices should be removed from kids’ bedrooms at night to prevent unsupervised use and reduce exposure to online threats. Parents should actively monitor every app, game, and online interaction, ensuring children only engage with people they know in real life. Families should also establish device-free times, like during meals, to encourage face-to-face communication and teach healthy social habits. Savvy Cyber Kids supports these efforts by providing age-appropriate educational resources, including children’s picture books, classroom activities, and digital parenting guides that help families navigate online safety. By focusing on direct education for young children and providing tools for parents and schools, the organization instills foundational privacy and cybersecurity awareness from an early age.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels welcome Ben Halpert, Founder of Savvy Cyber Kids, back to the podcast to discuss the growing digital threats facing kids today. Ben explains how AI chatbots are being treated as real friends, how social media messaging misleads parents, and why depending on tech companies for protection is risky. He also shares how predators use games and platforms to target kids, and how parental involvement and early education can help build safer digital habits. Ben offers steps parents can take to monitor and guide their children’s tech use at home, and explains how Savvy Cyber Kids helps by educating young children in schools and providing families with tools to teach online safety.
 
Todd Renner is a seasoned cybersecurity professional with over 25 years of experience leading global cyber investigations, incident response efforts, and digital asset recovery operations. He advises clients on a wide range of cybersecurity and data privacy matters, combining deep technical knowledge with a strategic understanding of risk, compliance, and regulatory frameworks. With a distinguished background at the Federal Bureau of Investigation (FBI) and National Security Agency (NSA), Todd has contributed to national security and international cyber collaboration and has played a key role in mentoring the next generation of cybersecurity professionals.

In this episode…

The rising complexity of cyber threats continues to test how businesses prepare, respond, and recover. Sophisticated threat actors are exploiting the vulnerabilities of private companies and leveraging AI tools to accelerate their attacks. Despite these dangers, many organizations hesitate to involve law enforcement when a cyber event occurs. This hesitation often stems from misconceptions about what law enforcement involvement entails, including fears of losing control over their systems or exposing sensitive company information. As a result, companies may prioritize quickly restoring operations over pursuing the attackers, leaving critical security gaps unaddressed.

Collaborating with law enforcement doesn’t mean forfeiting control or exposing confidential data unnecessarily. Investigations often reveal repeated issues, including mobile device compromises, missing multifactor authentication, and failure to improve cybersecurity measures after a breach. To be better prepared, companies need to develop and practice incident response plans, ensure leadership remains involved, and build security programs that evolve beyond incident response. And as threat actors actively use AI to accelerate data aggregation and create convincing deepfakes, companies need to start thinking about how to better detect these threats.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Todd Renner, Senior Managing Director at FTI Consulting, about how organizations are responding to modern cyber threats and where many still fall short. Todd shares why companies hesitate to engage law enforcement, how threat actors are using AI for faster targeting and impersonation, and why many businesses fail to strengthen their cybersecurity programs after a breach. He also discusses why deepfakes are eroding trust and raising new challenges for companies, and he provides practical tips for keeping both organizations and families safe from evolving threats.
 
Jodi Daniels is the Founder and CEO of Red Clover Advisors, a privacy consultancy that integrates data privacy strategy and compliance into a flexible, scalable approach that simplifies complex privacy challenges. A Certified Information Privacy Professional, Jodi brings over 27 years of experience in privacy, marketing, strategy, and finance across diverse sectors, supporting companies ranging from startups to the Fortune 500. Jodi is a national keynote speaker and has been featured in CNBC, The Economist, WSJ, Forbes, Inc., and many more publications. She holds an MBA and BBA from Emory University’s Goizueta Business School. Read her full bio.

Justin Daniels is a corporate attorney who advises domestic and international companies on business growth, M&A, and technology transactions, with over $2 billion in closed deals. He helps clients navigate complex issues involving data privacy, cybersecurity, and emerging technologies like AI, autonomous vehicles, blockchain, and fintech. Justin partners with C-suites and boards to manage cybersecurity as a strategic enterprise risk and leads breach response efforts across industries such as healthcare, logistics, and manufacturing. A frequent keynote speaker and media contributor, Justin has presented at top events including the RSA Conference, covering topics like cybersecurity in M&A, AI risk, and the intersection of privacy and innovation.

Together, Jodi and Justin host the top-ranked She Said Privacy/He Said Security Podcast and are authors of the WSJ best-selling book Data Reimagined: Building Trust One Byte at a Time.

In this episode…

From a major privacy summit to a regional AI event, experts across sectors are emphasizing that regulatory scrutiny is intensifying while AI capabilities and risks are accelerating. State privacy regulators are coordinating enforcement efforts, actively monitoring how companies handle privacy rights requests and whether cookie consent platforms work as they should. At the same time, AI tools are advancing rapidly with limited regulatory oversight, raising serious ethical and societal concerns. What practical lessons can businesses take from IAPP’s 2025 Global Privacy Summit and Atlanta’s AI Week to strengthen compliance, reduce risk, and prepare for what’s ahead?

At the 2025 IAPP Global Privacy Summit, a major theme emerged: state privacy regulators are collaborating on enforcement more closely than ever before. When it comes to honoring privacy rights, this collaboration spans early inquiry stages through active enforcement, making it critical for businesses to establish, regularly test, and monitor their privacy rights processes. It also means that companies need to audit cookie consent platforms regularly, ensure compliance with universal opt-out signals like the Global Privacy Control, and align privacy notices with actual practices. Regulatory enforcement advisories and FAQs should be treated as essential reading to stay current on regulators' priorities. Likewise, at the inaugural Atlanta AI Week, national security and ethical concerns came into sharper focus. Despite promises of localized data storage, some social media platforms and apps continue to raise alarms over foreign governments’ potential access to personal data. While experts encourage experimentation and practical application of AI tools, they are also urging businesses to remain vigilant to threats such as deepfakes, AI-driven misinformation, and the broader societal implications of unchecked AI development.

In this episode of She Said Privacy/He Said Security, Jodi Daniels, Founder and CEO of Red Clover Advisors, and Justin Daniels, Shareholder and Corporate Attorney at Baker Donelson, share their top takeaways from the IAPP Global Privacy Summit 2025 and the inaugural Atlanta AI Week. Jodi highlights practical steps for improving privacy rights request handling, the importance of regularly testing cookie consent management platforms, and ensuring published privacy notices reflect actual practices. Justin discusses the ethical challenges surrounding AI's rapid growth, the national security risks tied to social media platforms, and the dangers posed by deepfake technology. Together, Jodi and Justin emphasize the importance of continuous education, collaboration, and proactive action to prepare businesses for the future of privacy and AI.
 
Peter Kosmala is a course developer and instructor at York University in Canada, where he leads the Information Privacy Program. Peter is a former marketer, technologist, lobbyist, and association leader and a current consultant, educator, and international speaker. He served the IAPP as Vice President and led the launch of the CIPP certification in the early 2000s.

In this episode…

As data privacy continues to evolve, privacy professionals need to stay sharp by reinforcing their foundational knowledge and refining their practical skills. It’s no longer enough to just understand and comply with regulatory requirements. Today’s privacy work also demands cultural awareness, ethical judgment, and the ability to apply privacy principles to real-world settings. How can privacy professionals expand their expertise and remain effective in an ever-changing environment?

Privacy professionals can’t rely on legal knowledge alone to stay ahead. Privacy frameworks like the Fair Information Practice Principles (FIPPs), the OECD Guidelines, and others offer principles that help privacy pros navigate shifting global privacy laws and emerging technologies. Privacy pros should also deepen their cultural literacy, recognizing the societal and political drivers behind laws like the GDPR to align privacy practices with public expectations. Hands-on operational experience is just as important. Conducting privacy impact assessments (PIAs), responding to data subject access requests (DSARs), and developing clear communications are just a few ways privacy pros can turn knowledge into practical applications.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Peter Kosmala, Course Developer and Instructor at York University, about how privacy professionals can future-proof their skills. Peter discusses the value of foundational privacy frameworks, the tension between personalization and privacy, the limits of law-based compliance, and the growing need for ethical data use. He also explains the importance of privacy certifications, hands-on learning, and principled thinking to build programs that work in the real world.
 
Amanda Moore is a seasoned leader with extensive experience in privacy strategy, technology, and operations. She currently serves as the Senior Director of Privacy at DIRECTV, where she oversees the company’s privacy program with respect to technology and operations. Prior to her role at DIRECTV, she held pivotal positions at CVS Health and AT&T leading technical and business teams. Her career started in information technology but shifted to privacy before the onset of the CCPA. Amanda holds the CIPM certification and is a OneTrust Fellow of Privacy Technology.

In this episode…

Many organizations invest in privacy technology expecting it to deliver instant compliance, only to find that it fails to integrate with existing tools or processes. Adoption often lags when internal teams see privacy as a barrier or when tools are implemented without clearly defined goals. Choosing privacy technology before the business understands the specific problem it is meant to solve leads to confusion, inefficiency, and low adoption.

One of the most effective ways to boost technology adoption is to start with a clear understanding of business processes and goals before introducing new privacy tech. Successful privacy programs start by mapping business processes and making small backend adjustments that minimize disruption. Additionally, building internal awareness through roadshows, clear communication, and simplified privacy impact assessments helps shift perceptions and encourages teams to view privacy as a business enabler.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Amanda Moore, Senior Director of Privacy at DIRECTV, about integrating privacy technology into business operations. Amanda highlights how strong internal relationships help position privacy as a business enabler, why reframing communication for various business executives enhances support for privacy initiatives, and how measuring privacy program maturity with the use of technology provides more insight than surface-level metrics. She also discusses methods to increase adoption through internal awareness campaigns and simplified assessments, and the long-term value of reputation-building within organizations.
 
Brian Mullin is the CEO and Co-founder of Karlsgate. He is also a creator of Karlsgate Identity Exchange, a groundbreaking solution for zero-trust remote data matching and integration. Brian has over 30 years of experience in data privacy and security, with leadership roles at companies across the data-driven marketing ecosystem.

In this episode…

Data is often viewed as binary and categorized as either public or private, with the assumption that private data is secure and tightly protected. Companies often rely on firewalls, contracts, and policies to secure data, yet these measures don’t guarantee control once data is shared across multiple platforms and with third-party vendors. Every time data changes hands, the risk of exposure, misuse, or compliance failure increases. So, how can organizations securely share data while minimizing risks and protecting individual identities?

To address this challenge, companies can treat sensitive information as a “protected data” category where data is only shared under specific, controlled, and technology-enforced conditions. Rather than trusting third-party data clean rooms to match and analyze data sets, businesses can use Karlsgate’s peer-to-peer privacy-enhancing technology to prevent identity exposure altogether. This allows companies to reduce risk and eliminates the need for persistent IDs like cookies, so data set matching can occur without revealing personal information.

In this episode of the She Said Privacy/He Said Security Podcast, Jodi and Justin Daniels talk with Brian Mullin, CEO and Co-founder of Karlsgate, about how companies can rethink data sharing with privacy-first tools. Brian discusses the dangers of persistent identifiers and why protected pipelines offer a more scalable and secure solution than traditional data clean rooms. Brian also shares how Karlsgate enables secure data set matching between organizations while eliminating the need to hand over control, and he explains how organizations can adopt these technologies quickly without adding friction to existing workflows.
 
Farah Gasmi is the Co-founder and CPO of Dioptra, the accurate and customizable AI agent that drafts playbooks and consistently redlines contracts in Microsoft Word. Dioptra is trusted by some of the most innovative teams, like Y Combinator and Wilson Sonsini. She has over 10 years of experience building AI products in healthcare, insurance, and tech for companies like Spotify. Farah is also an adjunct professor at Columbia Business School in NYC. She teaches a Product Management course with a focus on AI and data products.

Laurie Ehrlich is the Chief Legal Officer at Dioptra, a cutting-edge legal tech startup revolutionizing contract redlining and playbook generation with AI. With a background leading legal operations and commercial contracting at Datadog and Cognizant, Laurie has deep expertise in scaling legal functions to drive business impact. She began her career in intellectual property law at top firms and holds a JD from NYU School of Law and a BS from Cornell. Passionate about innovation and diversity in tech, Laurie has also been a champion for women in leadership throughout her career.

In this episode…

Contract review can be time-consuming and complex, especially when working with third-party agreements that use unfamiliar language and formats. Legal teams often rely on manual review processes that make it challenging to maintain consistency across contracts, contributing to inefficiencies and increased costs. That’s why businesses need an effective solution that reduces the burden of contract analysis while supporting legal and strategic decision-making.

Dioptra, a legal tech startup, helps solve these challenges by leveraging AI to automate first-pass contract reviews, redline contracts, and generate playbooks. The AI agent analyzes past agreements to identify patterns, standard language, and key risk areas, allowing teams to streamline the review process. It supports a range of use cases — from NDAs to real estate deals — while improving consistency and reducing review time. Dioptra also enhances post-execution analysis by enabling companies to assess past agreements for compliance and risk exposure.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Farah Gasmi, Co-founder and Chief Product Officer at Dioptra, and Laurie Ehrlich, Chief Legal Officer at Dioptra, about how AI is used to streamline contract reviews. Together, they discuss how Dioptra accelerates contract reviews, supports security and privacy through strict data controls, and enables organizations to build smarter, more consistent contract processes — without removing the need for expert human judgment. Farah and Laurie also delve into the importance of AI-driven consistency in contract negotiation, vendor security evaluations, and how companies can safeguard sensitive data when using AI tools.
 
David Kennedy is the Founder and CEO of TrustedSec and Co-founder of Binary Defense. He is considered an industry leader in cybersecurity. As the former Chief Security Officer of Diebold, David has led global cybersecurity teams, testified before Congress, and continues to shape cybersecurity policy. He co-authored the Penetration Testing Execution Standard and is renowned in offensive security. A Marine with intelligence experience, he prioritizes family and fitness and co-hosts the Hacking Your Health Podcast. He also built a DeLorean time machine inspired by Back to the Future. David's life mission, which drives him every single day, is to help others and make the world a safer place in cybersecurity.

In this episode…

Cybersecurity threats are evolving at an alarming rate, and businesses face an uphill battle in protecting their data and systems. Ransomware attacks, supply chain vulnerabilities, and sophisticated social engineering tactics put organizations at constant risk. At the same time, companies face mounting pressure to protect customer data amid the growing influence of AI-driven misinformation, concerns surrounding platforms like TikTok, and other evolving cyber threats. How can businesses defend themselves proactively?

Building a strong cybersecurity program requires leadership, governance, and proactive risk management, not just technology. Many organizations struggle with detecting breaches in real time, making rapid threat detection and response essential. TrustedSec and Binary Defense are helping companies address these challenges by providing expert-led security consulting, penetration testing, and real-time threat monitoring. As cyber threats become more advanced, collaboration between security and privacy teams is essential to building a comprehensive defense strategy.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with David Kennedy, Founder and CEO of TrustedSec and Co-founder of Binary Defense, about evolving cybersecurity threats and how businesses can improve their security posture. David talks about the intersection of cybersecurity and privacy, the role of governance in building cybersecurity resilience and protecting data, how AI is shaping cyber threats, and the implications of cyber warfare. He also shares his experience testifying before Congress, explaining why lawmakers struggle to grasp cybersecurity issues. David provides advice on how companies can improve their threat detection and response capabilities and why social media presents a growing risk.
 
Jason Brenner is the RVP of Healthcare & Life Sciences at LiveRamp and has been working in the advertising and ad tech industries for over 20 years. He leads efforts to build data connectivity solutions for the healthcare and life sciences industries. Prior to LiveRamp, Jason held leadership positions at Placed, Verve, PayPal, Time Inc., The New York Times, and Condé Nast.

In this episode…

Companies in industries like healthcare and life sciences are leveraging data collaboration to collect valuable insights that drive innovation and improve customer experiences. However, for many organizations, balancing data collaboration with privacy, security, and regulatory compliance obligations remains a significant challenge. With consumer trust at stake and the risks of improper data handling ever present, how can companies balance innovation with responsible data use?

Data collaboration in healthcare presents both opportunities and challenges. Companies need to adopt privacy-by-design principles and engage legal and privacy teams early in the process. By implementing techniques such as data tokenization and de-identification, businesses can extract valuable insights while minimizing privacy and security risks. That's why companies like LiveRamp are making this process easier with a platform that transforms personally identifiable information into non-reversible tokens, allowing organizations to use data responsibly.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Jason Brenner, RVP of Healthcare and Life Sciences at LiveRamp, about the critical role of privacy and security in data collaboration. Jason shares insights on how organizations are navigating a complex and fragmented regulatory landscape, the importance of adopting privacy-by-design principles, and the value of engaging legal and privacy teams early in the process. He also shares how businesses can minimize data retention risks, the role of de-identification and tokenization in protecting sensitive information, and the importance of building customer trust through responsible data practices.
 
Niel Harper is a Certified Director and ISACA Board Vice Chair. He is also the Chief Information Security Officer and Data Protection Officer at Doodle. Based in Germany, Niel has more than 20 years of experience in IT risk management, cybersecurity, privacy, Internet governance and policy, and digital transformation.

Safia Kazi is the Privacy Professional Practices Principal at ISACA. She has worked at ISACA for just over a decade, initially working on ISACA’s periodicals and now serving as the Privacy Professional Practices Principal. She is based in Chicago. In 2021, she was a recipient of the AM&P Network’s Emerging Leader award, which recognizes innovative association publishing professionals under the age of 35.

In this episode…

ISACA’s State of Privacy 2025 survey reveals that privacy professionals are facing significant hurdles, including staffing shortages, budget cuts, and increasing demands for technical privacy expertise. Many organizations are shifting privacy responsibilities to legal and security teams without additional resources or training. At the same time, AI adoption is increasing, introducing new complexities and risks. With privacy budgets under strain and teams expected to do more with less, how can businesses sustain effective privacy programs while navigating new challenges?

According to the survey, one of the most pressing concerns for privacy teams is the growing demand for technical privacy expertise. Privacy by design also remains a challenge, with limited resources making it difficult for teams to embed privacy into product development from the outset. AI also plays a growing role in privacy operations, helping automate processes while raising concerns about data security, bias, and third-party risks. Despite these findings, businesses can make privacy sustainable by fostering a culture of privacy awareness from the top down, ensuring leadership understands the value of privacy beyond compliance.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Niel Harper, Certified Director and Board Vice Chair at ISACA and CISO and DPO at Doodle, and Safia Kazi, Privacy Professional Practices Principal at ISACA, about the findings from ISACA’s State of Privacy 2025 survey. Safia explains how privacy professionals can adapt to changes by continuously learning and staying informed on emerging risks, while Niel highlights the need for board-level privacy advocacy. They also explore how organizations are adapting to staffing shortages and budget constraints, the impact of AI on privacy operations, and how organizations can effectively navigate emerging risks.
 
Stephen Bolinger, Chief Privacy Officer at Informa, has a career that spans three continents and more than two decades, with the last seventeen years devoted to privacy and data protection matters across a range of industries, including tech, medical devices, and financial services. Stephen produced a fascinating film called Privacy People.

In this episode…

As technology evolves and cultural perspectives shift, so does the debate over privacy. With each new tech innovation, from smartphones to AI, companies are collecting more personal information than ever, leading some to claim that privacy is dead. Meanwhile, businesses are navigating a fragmented regulatory landscape, particularly in the United States, where varying laws create compliance challenges. These growing concerns raise the question: is privacy dead, or is it just evolving?

Cultural perspectives on privacy differ significantly, influencing how laws are structured in regions like the U.S., Europe, and Australia. While some nations treat privacy as a human right, others see it as a consumer protection issue. To address these concerns, companies need to integrate privacy into their overall data governance strategies, ensuring responsible data collection and AI oversight. As privacy expectations shift, businesses need to adapt, recognizing that privacy is not disappearing — it is being redefined, reinforcing the need for dedicated privacy professionals.

In this episode of the She Said Privacy/He Said Security podcast, Jodi and Justin Daniels chat with Stephen Bolinger, Chief Privacy Officer at Informa, about the evolving role of privacy professionals and how cultural differences influence data protection expectations worldwide. Stephen discusses the challenges of navigating privacy laws across different countries, the increasing importance of data and AI governance, and why privacy professionals need to expand their expertise beyond compliance to address broader ethical implications and technological advancements. Stephen also highlights his latest project, a documentary film entitled Privacy People, which sheds light on the complexities of data privacy.
 
Dave Sampson is the Vice President of Cyber Risk & Strategy at Thrive. In his role, he heads Thrive’s Consulting Practice, where he and his team of experts join forces with clients to deliver strategic guidance on a range of topics, including cybersecurity, IT operations, Cloud, Microsoft 365, compliance, disaster recovery planning, and more. Over the course of his extensive career, Dave has held various influential positions in the industry. He served as a Senior Consulting Technical Solution Manager at IBM, was Executive Vice President and Chief Technology Officer at Itrica, founded and served as CEO of Cloud Provider USA, and held the position of Chief Technology Officer at ColoSpace. Dave holds an MBA from Northeastern University and a BS in Communication and Media Studies from Emerson College. He is a former elected official in his hometown of Sandwich, MA.

In this episode…

As the cyber threat landscape becomes more unpredictable, organizations often struggle with implementing and managing different security tools and ensuring systems communicate effectively to keep up with threats. Organizations can no longer afford to take a reactive approach. Without a clear strategy and proper proactive security measures, they face operational disruptions, increased vulnerability to attacks, and challenges in responding to security incidents. So, how can companies take a smarter, more proactive approach to cyber risk management?

A proactive cybersecurity strategy isn’t just about having the right tools — it’s about integrating those tools effectively and ensuring visibility across systems to detect risk and prepare for worst-case scenarios. Companies like Thrive are making this process more efficient with a security platform that combines industry-leading security tools, real-time monitoring, and AI-driven automation into a cohesive, managed solution that helps companies optimize operations and mitigate cyber risks. Yet beyond the technology, companies also need to establish a disaster recovery plan, maintain transparency with third-party vendors, and perform privacy and security risk assessments to further enhance security and privacy measures and incident preparedness.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Dave Sampson, VP of Cyber Risk & Strategy at Thrive, about the critical components of cyber risk management. Dave discusses the challenges of integrating security solutions, the lessons learned from the CrowdStrike incident, and how AI is both a threat and an advantage in cybersecurity. He shares insights on cybersecurity best practices and discusses the growing need for outsourcing cybersecurity expertise, the role of third-party risk management, and the importance of disaster recovery planning. Dave also offers practical tips for strengthening both personal and enterprise security, emphasizing the need for vigilance, adaptability, and a proactive security mindset.
 


 


