DG-301b · Module 2
Model Validation and Iteration
3 min read
An intent scoring model is a prediction engine, and predictions must be validated against outcomes. Without validation, you are making targeting decisions based on a model that might be wrong, and you would never know it. Model validation compares the model's predictions ("this account is in-market") against actual outcomes (did the account convert?) and uses the gap to improve the model.
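As a concrete illustration, the core validation loop can be as simple as joining each account's predicted score tier to its observed outcome and computing a conversion rate per tier. This is a minimal sketch; the account records and tier labels are hypothetical:

```python
from collections import defaultdict

# Hypothetical validation records: (predicted tier, did the account convert?)
accounts = [
    ("tier1", True), ("tier1", True), ("tier1", False),
    ("tier2", True), ("tier2", False), ("tier2", False),
    ("tier3", False), ("tier3", False), ("tier3", False),
]

def conversion_rate_by_tier(records):
    """Compare predicted intent tiers against actual conversion outcomes."""
    totals = defaultdict(int)
    wins = defaultdict(int)
    for tier, converted in records:
        totals[tier] += 1
        wins[tier] += converted
    return {tier: wins[tier] / totals[tier] for tier in sorted(totals)}

rates = conversion_rate_by_tier(accounts)
# A healthy model converts tier1 > tier2 > tier3; an inversion flags bad weighting.
print(rates)
```

In production the records would come from your CRM's closed-won data rather than a hard-coded list, but the comparison is the same: predicted tier on one side, actual outcome on the other.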
- Conversion Rate by Score Band: Every quarter, calculate the conversion rate at each score tier. Tier 1 should convert at the highest rate, tier 2 at a moderate rate, and tier 3 at the lowest. If tier 2 converts at a higher rate than tier 1, the weighting is wrong. If all tiers convert at similar rates, the model is not differentiating; it needs more predictive signals or better weighting.
- Signal Contribution Analysis: Analyze which signals contributed most to the accounts that actually converted. Did first-party behavioral data predict conversion better than third-party intent? Were hiring signals more predictive than technology signals? Contribution analysis tells you where to invest in better data and where the model is leaning on weak signals.
- A/B Model Testing: When making a significant change to the model, such as adding new signal sources, changing weights, or adjusting decay functions, run the new model alongside the old one for 90 days. Compare conversion rates between the accounts each model prioritized. The better-performing model becomes the new production model.
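Signal contribution analysis can be approximated by asking, for each signal type, how often accounts where that signal fired went on to convert. A minimal sketch, with hypothetical signal names and records:

```python
from collections import Counter

# Hypothetical records: the signal types that fired for each account, plus outcome.
accounts = [
    {"signals": ["first_party", "hiring"], "converted": True},
    {"signals": ["third_party_intent"], "converted": False},
    {"signals": ["first_party", "technology"], "converted": True},
    {"signals": ["third_party_intent", "hiring"], "converted": False},
    {"signals": ["first_party"], "converted": True},
]

def signal_lift(records):
    """For each signal, the conversion rate of accounts where it fired."""
    fired = Counter()
    won = Counter()
    for acct in records:
        for sig in acct["signals"]:
            fired[sig] += 1
            won[sig] += acct["converted"]
    return {sig: won[sig] / fired[sig] for sig in fired}

lift = signal_lift(accounts)
# High-lift signals deserve more weight; low-lift signals are weak predictors.
for sig, rate in sorted(lift.items(), key=lambda kv: -kv[1]):
    print(f"{sig}: {rate:.0%}")
```

A per-signal conversion rate like this is a coarse proxy for true contribution (signals co-occur, so it ignores interaction effects), but it is usually enough to spot which data sources are carrying the model and which are noise.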
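The A/B comparison at the end of the test window reduces to comparing conversion rates between the two prioritized groups. A minimal sketch with hypothetical 90-day counts:

```python
def ab_compare(conversions_current, n_current, conversions_candidate, n_candidate):
    """Compare conversion rates of accounts prioritized by each model."""
    rate_current = conversions_current / n_current
    rate_candidate = conversions_candidate / n_candidate
    winner = "candidate" if rate_candidate > rate_current else "current"
    return rate_current, rate_candidate, winner

# Hypothetical results after the 90-day side-by-side run.
rate_current, rate_candidate, winner = ab_compare(
    conversions_current=18, n_current=400,
    conversions_candidate=27, n_candidate=400,
)
# A real rollout should also check statistical significance (e.g. a
# two-proportion test) before promoting the winner to production.
print(f"current: {rate_current:.1%}  candidate: {rate_candidate:.1%}  promote: {winner}")
```

With B2B sample sizes a raw rate difference can easily be noise, which is why the section's 90-day window matters: it gives each model enough prioritized accounts for the comparison to mean something.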
Do This
- Validate the model quarterly by comparing predicted intent scores against actual conversion outcomes
- Analyze which signals contributed most to successful predictions and invest accordingly
- A/B test model changes against the existing model before full deployment
Avoid This
- Trust the model without validation because "the vendor says it works"
- Keep adding signals without checking whether they improve prediction accuracy
- Make major model changes without a test period to compare performance