CX-201a · Module 1

Calibration & Weighting

3 min read

A health score model that has never been calibrated is a theory. A health score model that has been calibrated against actual retention outcomes is a diagnostic tool. Calibration is the process of adjusting pillar weights, threshold levels, and metric selections based on whether the scores actually predicted what happened. The model that gave a green score to an account that churned has a calibration problem. Find it. Fix it.

  1. Initial Weighting: Start with equal weights across all four pillars, 25% each. This is a hypothesis, not a conclusion. Equal weighting is the neutral starting point that prevents your assumptions from biasing the model before data can inform it.
  2. Retrospective Analysis: After six months, analyze every account outcome (renewals, expansions, and churns) against the account's health score from three months prior. Which scores predicted the outcomes accurately? Where did the model miss? The pattern in the misses reveals the calibration adjustment needed.
  3. Weight Adjustment: Increase the weight of pillars that proved predictive; decrease the weight of pillars that added noise. In most organizations, engagement and adoption predict short-term retention better than outcomes and relationship, because engagement and adoption decline first. Your mileage may vary; let the data determine your specific weightings.
  4. Continuous Recalibration: Recalibrate quarterly. The factors that predict churn change as your product matures, your client base evolves, and market conditions shift. A model calibrated on your first twenty clients will not be accurate for your two-hundredth. Build recalibration into your quarterly review cadence.
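
Steps 2 and 3 above can be sketched in code. Everything in this example is hypothetical: the four pillar names are taken from step 3, the account records are invented, and weighting each pillar in proportion to its correlation with retention is one reasonable adjustment rule among many, not a prescribed formula.

```python
# Hypothetical sketch of retrospective analysis and weight adjustment.
# The pillar names and sample data are invented for illustration.

PILLARS = ["engagement", "adoption", "outcomes", "relationship"]

# Each record: pillar scores from three months before the outcome,
# plus whether the account renewed (1) or churned (0).
accounts = [
    {"engagement": 82, "adoption": 75, "outcomes": 60, "relationship": 70, "renewed": 1},
    {"engagement": 40, "adoption": 35, "outcomes": 65, "relationship": 80, "renewed": 0},
    {"engagement": 90, "adoption": 88, "outcomes": 55, "relationship": 50, "renewed": 1},
    {"engagement": 30, "adoption": 45, "outcomes": 70, "relationship": 75, "renewed": 0},
    {"engagement": 70, "adoption": 65, "outcomes": 50, "relationship": 60, "renewed": 1},
    {"engagement": 35, "adoption": 30, "outcomes": 68, "relationship": 72, "renewed": 0},
]

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def recalibrated_weights(accounts):
    """Weight each pillar by how well it predicted retention.

    Negative or zero correlations are floored at zero (a pillar
    that added noise earns no weight); the remaining correlations
    are normalized so the weights sum to 1.
    """
    outcomes = [a["renewed"] for a in accounts]
    raw = {
        p: max(0.0, correlation([a[p] for a in accounts], outcomes))
        for p in PILLARS
    }
    total = sum(raw.values()) or 1.0
    return {p: raw[p] / total for p in PILLARS}

weights = recalibrated_weights(accounts)
for pillar, w in weights.items():
    print(f"{pillar:>12}: {w:.2f}")
```

With this invented data set, engagement and adoption correlate positively with renewal while outcomes and relationship do not, so the adjusted weights concentrate on the first two, mirroring the pattern described in step 3. A real recalibration would run over your full account history and likely use a proper statistical test, but the shape of the exercise is the same.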