DG-301g · Module 2

Incrementality Testing

3 min read

Attribution models tell you which touchpoints were present before a conversion. Incrementality testing tells you which touchpoints actually caused it. The difference matters enormously: if a prospect who was going to buy anyway happens to see your LinkedIn ad before converting, every attribution model credits the ad, but the ad did not cause the conversion. Incrementality testing isolates causation by comparing a treatment group that received the touchpoint against a control group that did not.

  1. Design the Experiment. Select the channel or campaign you want to test. Randomly split a comparable segment into two groups: treatment (receives the campaign) and control (does not). Ensure the groups are similar in ICP fit, engagement history, and territory quality. Run the experiment for 90 days, long enough to capture full deal cycles.
  2. Measure the Lift. Compare pipeline outcomes between treatment and control groups: lift = (treatment pipeline − control pipeline) / control pipeline. If the treatment group produces 25% more pipeline than the control group, the campaign's incremental lift is 25%. The control group's pipeline represents what would have happened without the campaign; the lift is the campaign's true value.
  3. Apply to Budget Decisions. Use incrementality data to make budget allocation decisions. A campaign that shows 30% incremental lift justifies investment. A campaign that shows 5% lift, despite strong attribution numbers, may be getting credit for conversions that would have happened anyway. Incrementality testing prevents over-investment in channels that look good in attribution but add little incremental value.
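The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production experiment framework: the function names (`split_groups`, `incremental_lift`, `budget_decision`) and the pipeline dollar figures are hypothetical, and the 30%/5% thresholds are taken directly from the guidance above.

```python
import random

def split_groups(accounts, seed=42):
    """Step 1: randomly assign comparable accounts to treatment and control."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = accounts[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # (treatment, control)

def incremental_lift(treatment_pipeline, control_pipeline):
    """Step 2: lift = (treatment - control) / control, as a fraction."""
    return (treatment_pipeline - control_pipeline) / control_pipeline

def budget_decision(lift, invest_threshold=0.30, cut_threshold=0.05):
    """Step 3: translate measured lift into a coarse budget action."""
    if lift >= invest_threshold:
        return "invest"
    if lift <= cut_threshold:
        return "deprioritize"
    return "hold"

# Worked example: treatment produced $1.25M in pipeline vs. $1.0M for control.
lift = incremental_lift(1_250_000, 1_000_000)
print(f"{lift:.0%}")          # 25%
print(budget_decision(lift))  # hold
```

A real test would also check that the groups are balanced on ICP fit, engagement history, and territory quality before launch, and would apply a significance test to the lift rather than reading a point estimate at face value.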