Triangulation Over Attribution: Why the Best Marketers Use Three Models, Not One
The pressure to measure marketing effectiveness has never been stronger. Finance teams demand accountability. Privacy regulation dismantles the data infrastructure marketers built their assumptions on. And the executives who greenlit those campaigns want to know whether the investment paid off.
In response, marketers have gravitated toward a comfortable fiction: that a single measurement model can answer all the questions.
It doesn't work that way. The best-performing teams have abandoned the search for the single source of truth. Instead, they use three complementary models: media mix modelling (MMM), incrementality testing, and multi-touch attribution. Each answers a different question. Each reveals what the others miss. Together, they provide a far more reliable picture of what actually drives revenue.
This is measurement triangulation. It's not a new idea in science, but it's still rare in marketing. Those who use it outpace their peers.
Why One Model Fails
Before examining the three models, it's worth understanding why none of them works alone.
Multi-touch attribution is the model most familiar to practitioners. It's granular, real-time, and sits naturally in your marketing stack.
But attribution is also fundamentally biased. It systematically overstates the contribution of bottom-funnel channels and last-click touchpoints, because that's where the conversion measurement infrastructure naturally sits.
Media mix modelling occupies the opposite end of the spectrum. It takes historical spend and sales data and uses statistical regression to estimate the contribution of each channel over time. It's excellent at strategic allocation questions.
But MMM is slow. You need months or years of historical data to build a reliable model. It updates quarterly at best.
Incrementality testing, meanwhile, is the gold standard for proving causation. Run a well-designed experiment: hold out a portion of your audience from an advertising exposure, measure the difference in behaviour between the test and control groups, and you've isolated the true causal impact.
But incrementality testing is expensive and disruptive. Most teams can run perhaps one or two rigorous tests per quarter.
So what happens in practice? Teams rely on whichever model is easiest to access and distrust the others, creating friction and confusion.
The Case for Triangulation
Triangulation is a surveying technique. When you want to determine the precise location of a distant point, you measure its angle from two or more known positions. The multiple measurements converge on a single location, and the convergence itself provides confidence in the result.
Marketing measurement works the same way. By using three models simultaneously, you gain three advantages: coverage, validation, and diagnostic insight.
Coverage means that the three models collectively answer more questions than any single one. Attribution handles tactical optimisation. MMM handles strategic allocation. Incrementality provides proof of causation.
Validation means that where the models converge, you can be confident in the result.
Diagnostic insight comes from understanding why the models disagree. Attribution might show social media as a top contributor, but MMM suggests its incremental impact is minimal. The explanation is often that social is attracting an audience that would have converted anyway. That's valuable insight.
The Three Models Explained
Media Mix Modelling: Strategic Perspective
Media mix modelling uses regression analysis to estimate the contribution of each marketing channel to overall business outcomes. You feed the model historical weekly or monthly spend data across all channels, along with sales or conversion data for the same periods.
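The core mechanics can be illustrated with a minimal sketch. This is not a production MMM (real models add adstock decay, saturation curves, and seasonality controls); it shows only the underlying regression step, fitted on synthetic data with hypothetical channel names.

```python
import numpy as np

# Hypothetical weekly spend (columns: search, social, tv) and sales.
# A real MMM would add adstock, saturation, and seasonality terms;
# this sketch shows only the core regression step on synthetic data.
rng = np.random.default_rng(0)
weeks = 104  # two years of weekly observations
spend = rng.uniform(10, 100, size=(weeks, 3))
true_roi = np.array([2.0, 1.2, 0.8])  # revenue per unit spend, assumed
baseline = 500.0                      # sales that occur with zero spend
sales = baseline + spend @ true_roi + rng.normal(0, 20, weeks)

# Fit: sales = baseline + sum(roi_i * spend_i), by ordinary least squares.
X = np.column_stack([np.ones(weeks), spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
est_baseline, est_roi = coef[0], coef[1:]

for name, roi in zip(["search", "social", "tv"], est_roi):
    print(f"{name}: estimated revenue per unit spend ~ {roi:.2f}")
```

With enough clean weekly history, the estimated coefficients converge on each channel's true contribution; with too little data, the confidence intervals widen, which is why MMM demands a long baseline.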
The core strengths are substantial. MMM works with whatever data you have, including offline channels, brand awareness metrics, and seasonal variation. It can handle very large, complex marketing mixes.
The weaknesses are equally important. MMM requires a long historical baseline: at least 2-3 years of clean, consistent data. It updates slowly. The model's outputs are probabilistic estimates with confidence intervals, not precise truths.
MMM is where you start for strategic questions. Where should we allocate our budget next year? Which channels are mature and saturated?
Incrementality Testing: Causal Proof
Incrementality testing works by deliberately varying your marketing exposure and measuring the outcome difference. You identify an audience segment, randomly split it into test and control groups, and measure whether the test group's behaviour differs from the control group's.
The resulting difference is the incremental impact: the amount of revenue, conversions, or customer value directly caused by that ad exposure.
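The arithmetic of a holdout test is simple, and worth seeing concretely. A minimal sketch, using hypothetical group sizes and conversion counts, with a standard two-proportion z-test to check that the measured lift isn't noise:

```python
import math

# Hypothetical results: sizes and conversions for the randomly assigned
# exposed (test) and held-out (control) groups.
test_n, test_conv = 50_000, 1_150
control_n, control_conv = 50_000, 1_000

p_test = test_conv / test_n            # 2.30% conversion rate
p_control = control_conv / control_n   # 2.00% conversion rate

# Absolute and relative incremental lift caused by the exposure.
abs_lift = p_test - p_control
rel_lift = abs_lift / p_control

# Two-proportion z-test: is the lift distinguishable from noise?
p_pool = (test_conv + control_conv) / (test_n + control_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / control_n))
z = abs_lift / se

print(f"absolute lift: {abs_lift:.2%}, relative lift: {rel_lift:.1%}, z = {z:.2f}")
```

Note the group sizes: isolating a 0.3-point absolute lift at conventional significance takes tens of thousands of users per arm, which is why incrementality testing demands scale.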
The strengths are compelling. There is no attribution bias. The result is a clear causal statement.
The challenges are significant. You need substantial audience scale. You must withhold marketing from the control group. And the tests take weeks or months to conclude.
Most mature organisations run incrementality tests quarterly on high-priority channels or campaigns.
Multi-Touch Attribution: Tactical Layer
Attribution models assign credit for a conversion to the various touchpoints in the customer's journey.
Different attribution models divide this credit differently. Last-click attribution gives all the credit to the final touchpoint. First-touch gives it all to the first. Linear divides credit equally across every touchpoint. Data-driven attribution uses machine learning to estimate each touchpoint's contribution.
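The rule-based models are easy to express directly. A minimal sketch over one hypothetical customer journey (data-driven attribution would replace these fixed rules with a learned model):

```python
# Three rule-based attribution models. The journey and channel names
# are hypothetical; each function returns {channel: share_of_credit}.
def last_click(journey):
    return {journey[-1]: 1.0}

def first_touch(journey):
    return {journey[0]: 1.0}

def linear(journey):
    share = 1.0 / len(journey)
    credit = {}
    for touch in journey:
        credit[touch] = credit.get(touch, 0.0) + share
    return credit

journey = ["display", "social", "search", "email"]
print(last_click(journey))   # all credit to email
print(first_touch(journey))  # all credit to display
print(linear(journey))       # 0.25 to each touchpoint
```

The same journey yields three different answers, which is the point: the "top channel" in an attribution report is partly an artefact of which rule you chose.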
The advantages are straightforward. Attribution is granular. It's real-time. It's native to your ad platform.
The disadvantages are equally well known. Attribution systematically overstates bottom-funnel channels. It cannot credit unmeasured channels. And it is eroded by privacy regulation and walled gardens.
Attribution is best used as a tactical optimisation layer, not as strategic truth.
How the Models Converge and Diverge
In a well-functioning measurement system, the three models should produce results that are broadly consistent, though they'll differ in specifics because they answer different questions.
Attribution tells you: search and social are our top-performing channels, by conversion volume.
MMM tells you: social's ROI has declined as we've increased spend, suggesting diminishing returns. Display seems less efficient than attribution suggests.
Incrementality tells you: search genuinely causes conversions. Social also causes conversions, but the incremental impact is weaker than attribution suggests.
When all three models point the same direction, confidence is high. When they diverge, it's not a flaw: it's information.
Reconciling Conflicting Signals
Conflicts between models are common, and they're not inherently a problem. Understanding the source of the conflict is where the value lies.
Suppose attribution shows email as a top converter, but incrementality testing shows a much lower incremental effect. The most likely explanation is that email is reaching an audience that's already deeply engaged and likely to convert anyway.
Another common conflict: MMM shows a channel as high-ROI, but attribution is weak and incrementality testing shows little effect. This typically means the channel is absorbing demand that already exists elsewhere in your marketing mix.
These conflicts are not failures of measurement; they're insights. They're often the moment a marketing team stops wasting money on redundant channels.
Implementing Triangulation: A Practical Framework
Building a triangulation system requires investment, but it's within reach for most organisations with meaningful marketing spend.
Start with what you have. If you're already using an attribution model, you have the first leg of the triangle.
Next, implement MMM if you have two years of clean data and at least £100,000 in monthly marketing spend.
Finally, design a portfolio of incrementality tests. Identify your top three to five channels by spend. Design one test per quarter for each.
The Calibration Loop
Once you have all three models, the work shifts from building them to maintaining them and using them together.
Create a quarterly calibration process. Sit down with results from all three models. Where do they agree? Where do they disagree?
Use incrementality results to inform MMM. Use MMM to guide your incrementality testing roadmap. Use attribution to spot tactical opportunities.
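The quarterly review can be partially mechanised. A minimal sketch, using hypothetical per-channel ROI estimates and an arbitrary divergence threshold, that flags which channels need a reconciliation conversation:

```python
# Hypothetical quarterly calibration: each channel's ROI (revenue per
# unit spend) as estimated by attribution, MMM, and incrementality.
estimates = {
    "search":  (3.1, 2.8, 2.9),
    "social":  (2.6, 1.1, 0.9),
    "display": (0.8, 1.6, 1.5),
}

DIVERGENCE_THRESHOLD = 0.5  # arbitrary cut-off for this sketch

results = {}
for channel, (attr, mmm, inc) in estimates.items():
    spread = max(attr, mmm, inc) - min(attr, mmm, inc)
    results[channel] = "review" if spread > DIVERGENCE_THRESHOLD else "aligned"
    print(f"{channel}: spread {spread:.1f} -> {results[channel]}")
```

In this made-up example, search is aligned across all three models, while social and display diverge, exactly the pattern of conflicts described above, and the cue for where to direct the next incrementality test.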
The three models inform each other in a continuous loop. That's where triangulation delivers its full value.
---
The temptation to find a single measurement model that answers all questions is understandable. Organisations prefer simplicity.
But measurement doesn't work that way. No single model captures the full picture.
The best-performing marketing teams have accepted this. They use three models because they answer three different, essential questions. MMM tells you where to allocate budget strategically. Incrementality tells you which channels genuinely cause outcomes. Attribution tells you where to optimise tactically. Together, they converge on reality.
That convergence is what makes triangulation powerful. It doesn't eliminate measurement uncertainty. But it vastly reduces it.
