Content provided by Phil Gamache. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Phil Gamache or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
176: Rajeev Nair: Causal AI and a unified measurement framework


What’s up everyone, today we have the pleasure of sitting down with Rajeev Nair, Co-Founder and Chief Product Officer at Lifesight.

Summary: Rajeev believes measurement only works when it’s unified or multi-modal, a stack that blends multi-touch attribution, incrementality, media mix modeling and causal AI, each used for the decision it fits. At Lifesight, that means using causal machine learning to surface hidden experiments in messy historical data and designing geo tests that reveal what actually drives lift. Attribution alone can’t tell you what changed outcomes. Rajeev’s team moved past dashboards and built a system that focuses on clarity, not correlation. Attribution handles daily tweaks. MMM guides long-term planning. Experiments validate what’s real. Each tool plays a role, but none can stand alone.

About Rajeev

Rajeev Nair is the Co-Founder and Chief Product Officer at Lifesight, where he’s spent the last several years shaping how modern marketers measure impact. Before that, he led product at Moda and served as a business intelligence analyst at Ebizu. He began his career as a technical business analyst at Infosys, building a foundation in data and systems thinking that still drives his work today.

Digital Astrology and the Attribution Illusion

Lifesight started by building traditional attribution tools focused on tracking user journeys and distributing credit across touchpoints using ID graphs. The goal was to help brands understand which interactions influenced conversions. But Rajeev and his team quickly realized that attribution alone didn’t answer the core question their customers kept asking: what actually drove incremental revenue? In response, they shifted gears around 2019, moving toward incrementality testing.

They began with exposed versus synthetic control groups, then evolved to more scalable, identity-agnostic methods like geo testing. This pivot marked a fundamental change in their product philosophy: from mapping behavior to measuring causal impact.

Rajeev shares his thoughts on multi-touch attribution and the evolution of the space.

The Dilution of The Term Attribution

Attribution has been hijacked by tracking. Rajeev points straight at the rot. What used to be a way to understand which actions actually led to a customer buying something has become little more than a digital breadcrumb trail. Marketers keep calling it attribution, but what they're really doing is surveillance. They're collecting events and assigning credit based on who touched what ad and when, even if none of it actually changed the buyer’s mind.

The biggest failure here is causality. Rajeev is clear about this. Attribution is supposed to tell you what caused an outcome. Not what appeared next to it. Not what someone happened to click on right before. Actual cause and effect. Instead, we get dashboards full of correlation dressed up as insight. You might see a spike in conversions and assume it was the retargeting campaign, but you’re building castles on sand if you can’t prove causality.

Then comes the complexity problem. Today’s marketing stack is a jungle. You have:

  • Paid ads across five different platforms
  • Organic content
  • Discounts
  • Seasonal shifts
  • Pricing changes
  • Product updates

All these things impact results, but most attribution models treat them like isolated variables. They don’t ask, “What moved the needle more than it would’ve moved otherwise?” They ask, “Who touched the user last before they bought?” That’s not measurement. That’s astrology for marketers.

“Attribution, in today’s marketing context, has just come to mean tracking. The word itself has been diluted.”

Multi-touch attribution doesn’t save you either. It distributes credit differently, but it’s still built on flawed data and weak assumptions. If you’re measuring everything and understanding nothing, you’re just spending more money to stay confused. Real marketing optimization requires incrementality analysis, not just a prettier funnel chart.
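To see why redistributing credit doesn't fix the underlying problem, here is a toy sketch. The journeys and the `assign_credit` helper are hypothetical, not anything from Lifesight's product: two common credit models, last-touch and linear, produce different answers from the exact same data, and neither one says anything about causation.

```python
from collections import defaultdict

def assign_credit(journeys, model="last_touch"):
    """Distribute conversion credit across channels.

    Each journey is an ordered list of channel touchpoints ending in a
    conversion. 'last_touch' gives all credit to the final touch;
    'linear' splits credit evenly across every touch in the path.
    """
    credit = defaultdict(float)
    for path in journeys:
        if model == "last_touch":
            credit[path[-1]] += 1.0
        elif model == "linear":
            for channel in path:
                credit[channel] += 1.0 / len(path)
    return dict(credit)

# Three hypothetical converting journeys, all ending in retargeting
journeys = [
    ["organic", "paid_search", "retargeting"],
    ["paid_search", "retargeting"],
    ["organic", "email", "retargeting"],
]

print(assign_credit(journeys, "last_touch"))  # retargeting gets all 3 conversions
print(assign_credit(journeys, "linear"))      # credit spread thinly across channels
```

Both models are just bookkeeping rules applied to observed paths. Swapping one for the other changes the budget conversation, but neither tells you whether removing retargeting would have cost you a single sale.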

To Measure What Caused a Sale, You Need Experiments

Even with perfect data, attribution keeps lying. Rajeev learned that the hard way. His team chased the attribution grail by building identity graphs so detailed they could probably tell you what toothpaste a customer used. They stitched together first-party and third-party data, mapped the full user journey, and connected every touchpoint from TikTok to in-store checkout. Then they ran the numbers. What came back wasn’t insight. It was statistical noise.

Every marketing team that has sunk months into journey mapping has hit the same wall. At the bottom of the funnel, conversion paths light up like a Christmas tree. Retargeting ads, last-clicked emails, discount codes, they all scream high correlation with purchase. The logic feels airtight until you realize it's just recency bias with a data export. These touchpoints show up because they’re close to conversion. That doesn’t mean they caused it.

“Causality is essentially correlation plus bias. Can we somehow manage the bias so that we could interpret the observed correlation as causality?”

What Rajeev means is that while correlation on its own proves nothing, it’s still the starting point. You need correlation to even guess at a causal link, but then you have to strip out all the bias (timing, selection, confounding variables) before you can claim anything actually drove the outcome. It’s a messy process, and attribution data alone doesn’t get you there.
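A small simulation makes the bias problem concrete. Everything here is invented for illustration: a hidden "intent" variable drives both retargeting exposure (platforms chase likely buyers) and purchase, while the ad itself has zero causal effect by construction. The naive exposed-versus-unexposed comparison still reports a large "lift."

```python
import random

random.seed(0)

# Hypothetical toy world: high-intent shoppers are both more likely to be
# retargeted and more likely to buy. The ad has NO effect on purchase.
population = []
for _ in range(100_000):
    high_intent = random.random() < 0.2
    retargeted = random.random() < (0.8 if high_intent else 0.1)
    bought = random.random() < (0.5 if high_intent else 0.02)  # ad plays no role
    population.append((retargeted, bought))

def conversion_rate(rows, exposed):
    group = [b for r, b in rows if r == exposed]
    return sum(group) / len(group)

# Exposed vs. unexposed comparison: large and positive, yet the true effect is zero
naive_lift = conversion_rate(population, True) - conversion_rate(population, False)
print(f"naive 'lift' from retargeting: {naive_lift:.3f}")
```

The correlation is real; the causation is not. Only a design that breaks the link between intent and exposure, such as random assignment or a geo holdout, recovers the true (zero) effect.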

That’s the puzzle. You can’t infer real marketing effectiveness just from journey data. You can’t say the billboard drove walk-ins if everyone had to walk past it to enter the store. You can’t say coupons created conversions if they were handed out after someone had already walked in. Attribution doesn’t answer those questions. It only tells you what happened. It doesn’t explain why it happened.

To measure causality, you need experiments. Rajeev gives it straight: run controlled tests. Put a billboard at one store, skip it at another. Offer discounts to some, hold them back from others. Then compare outcomes. Only when you hold a variable constant and see lift can you say something worked. Attribution on its own is just a correlation engine. And correlation, without real-world intervention, tells you absolutely nothing useful.

Key takeaway: Attribution data without controlled testing isn’t useful. If you want to know what drives results, design experiments. Stop treating customer journeys like gospel. Use journey data as a starting point, then isolate variables and measure actual lift. That way you can make real decisions instead of retroactively rationalizing whatever got funded last quarter.
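The billboard-style test above boils down to a simple readout. The figures below are hypothetical daily revenue for a matched treatment region (campaign live) and control region (campaign dark) over the same window, a deliberately bare-bones sketch of how lift falls out of a holdout.

```python
# Hypothetical daily revenue over the same five-day window
treatment = [1200, 1340, 1280, 1410, 1390]  # region with the campaign live
control = [1100, 1150, 1090, 1160, 1140]    # matched region, campaign dark

avg_t = sum(treatment) / len(treatment)
avg_c = sum(control) / len(control)

incremental_per_day = avg_t - avg_c          # lift attributable to the campaign
lift_pct = incremental_per_day / avg_c * 100

print(f"incremental revenue/day: {incremental_per_day:.0f}")
print(f"lift vs control: {lift_pct:.1f}%")
```

In practice this naive difference only holds if the regions are well matched; a more defensible version would also compare each region against its own pre-test baseline (difference-in-differences) to absorb any standing gap between the markets.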

The Limitations of Incrementality Tests and How Quasi-Experiments Can Help

Most teams think they’re being scientific when they run an incrementality test. But the truth is, these tests are fragile. Geo tests are high-effort and easy to mess up. Quasi-experiments are directional at best and misleading at worst. If you’re not careful with design, timing, and interpretation, you’ll end up with results that look rigorous… but aren’t.

Why Most Teams Get Geo Testing Completely Wrong

Geo testing gets romanticized as this high-integrity measurement method, but most teams treat it like a side quest. They run it once, complain it was expensive, then go back to attribution dashboards because they're easier to screenshot in a slide deck. The truth is, geo testing takes guts. It means pulling spend from regions that bring in real revenue. That’s not a simulation. It’s a real-world test with real-world consequences.

Rajeev breaks it down with...

