CX-301a · Module 3
Predictive Validation
4 min read
A leading indicator that does not actually predict anything is a vanity metric: it gives you the feeling of foresight without the substance. Predictive validation is the rigorous process of testing whether your leading indicators predicted the outcomes that followed. Did the accounts with declining composite scores go on to suffer health score drops? Did the accounts with improving composite scores go on to renew? Validation separates leading indicators that work from leading indicators that look plausible but predict nothing.
- Retrospective Analysis: Every quarter, pull the leading indicator scores from three months ago and compare them to what actually happened. For every account that churned, what did the leading indicators show three months before? For every account that expanded, what was the trajectory? The retrospective reveals the model's accuracy and identifies where it missed; a minimal join sketch follows this list.
- False Signal Analysis: Identify the false positives (the leading indicators predicted decline but the account remained healthy) and the false negatives (the leading indicators looked healthy but the account declined). False positives are costly because they waste intervention resources. False negatives are dangerous because they create blind spots. Both types inform calibration; see the classification sketch after this list.
- Sensitivity Tuning: Adjust indicator thresholds based on the false signal analysis. If response velocity decline triggers too many false positives, raise the threshold and require a larger decline before flagging. If stakeholder breadth loss is generating false negatives, lower the threshold and flag earlier. The goal is a false positive rate below 20% and a false negative rate below 10%; the threshold sweep sketched after this list shows one way to tune toward those targets.
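Mechanically, the retrospective is a join between a quarter-old snapshot of indicator scores and the outcomes that followed. Here is a minimal sketch in Python with pandas, assuming you keep such snapshots; the account IDs, column names (`composite_trend`, `outcome`), and values are all hypothetical:

```python
import pandas as pd

# Hypothetical snapshot of composite indicator trajectories, taken three months ago.
scores_last_quarter = pd.DataFrame({
    "account_id": ["a1", "a2", "a3", "a4"],
    "composite_trend": ["declining", "declining", "improving", "improving"],
})

# What actually happened to those accounts over the following quarter.
outcomes = pd.DataFrame({
    "account_id": ["a1", "a2", "a3", "a4"],
    "outcome": ["churned", "renewed", "renewed", "churned"],
})

# Join the old scores to the observed outcomes, account by account.
retro = scores_last_quarter.merge(outcomes, on="account_id")

# Cross-tabulate predicted trajectory against outcome: the quarterly
# retrospective in table form, showing hits and misses at a glance.
print(pd.crosstab(retro["composite_trend"], retro["outcome"]))
```

Each cell where trajectory and outcome disagree is a miss worth investigating account by account.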
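The false signal analysis is a confusion matrix over that joined table. A sketch under one reasonable reading of the rates: false positives measured as the share of flagged accounts that stayed healthy, false negatives as the share of cleared accounts that declined. The boolean columns are hypothetical stand-ins for "the indicators flagged this account" and "the account actually declined":

```python
import pandas as pd

# Hypothetical quarter-old flags joined to observed outcomes
# (the output shape of the retrospective join above).
retro = pd.DataFrame({
    "predicted_decline": [True, True, False, False, True, False],
    "declined":          [True, False, False, True, True, False],
})

# Classify every account into one confusion-matrix cell.
tp = (retro["predicted_decline"] & retro["declined"]).sum()
fp = (retro["predicted_decline"] & ~retro["declined"]).sum()   # wasted interventions
fn = (~retro["predicted_decline"] & retro["declined"]).sum()   # blind spots
tn = (~retro["predicted_decline"] & ~retro["declined"]).sum()

fp_rate = fp / (fp + tp)   # share of flagged accounts that stayed healthy
fn_rate = fn / (fn + tn)   # share of cleared accounts that declined
print(f"false positive rate: {fp_rate:.0%}, false negative rate: {fn_rate:.0%}")
```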
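Sensitivity tuning can then be run as a simple threshold sweep over the same history. A sketch with invented numbers for a single indicator (percentage decline in response velocity three months before the outcome); the targets are the ones stated in this module, below 20% false positives and below 10% false negatives:

```python
import numpy as np

# Hypothetical history: response velocity decline (%) three months before
# the outcome, and whether the account actually declined.
velocity_drop = np.array([5, 40, 12, 55, 30, 8, 48, 22, 60, 15])
declined      = np.array([0, 1, 0, 1, 0, 0, 1, 1, 1, 0], dtype=bool)

def false_signal_rates(threshold):
    """Flag accounts whose drop meets the threshold, then score the flags."""
    flagged = velocity_drop >= threshold
    fp = (flagged & ~declined).sum()
    tp = (flagged & declined).sum()
    fn = (~flagged & declined).sum()
    tn = (~flagged & ~declined).sum()
    fp_rate = fp / max(fp + tp, 1)   # false alarms among flagged accounts
    fn_rate = fn / max(fn + tn, 1)   # misses among cleared accounts
    return fp_rate, fn_rate

# Sweep candidate thresholds and mark the ones that meet both targets.
for t in range(5, 65, 5):
    fp_rate, fn_rate = false_signal_rates(t)
    ok = fp_rate < 0.20 and fn_rate < 0.10
    print(f"threshold {t:>2}%: FP {fp_rate:.0%}, FN {fn_rate:.0%}"
          + ("  <- meets targets" if ok else ""))
```

Raising the threshold trades false positives for false negatives, so the sweep makes the tension between the two targets visible before you commit to a number.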