LR-301b · Module 3
Scoring Model Validation
3 min read
The ultimate test of a risk scoring model is whether it predicts outcomes. Do provisions that score high risk actually create problems more often than provisions that score low risk? If the correlation between scores and outcomes is weak, the model is measuring something — but not risk. Validation compares historical scores against actual contract outcomes to determine whether the model is calibrated against reality.
- Outcome Tracking: For every contract that produces a dispute, a claim, a breach, or a negotiated amendment, record which provision was involved and what its risk score was at the time of review. Over time, this dataset reveals whether high scores predict high problems.
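A minimal sketch of what an outcome-tracking record might look like, assuming a simple Python data model. The field names and outcome labels here are illustrative, not prescribed by the module:

```python
from dataclasses import dataclass

@dataclass
class OutcomeRecord:
    contract_id: str   # which contract the provision came from
    provision: str     # e.g. "indemnification", "termination"
    risk_score: float  # score assigned at the time of review
    outcome: str       # "dispute", "claim", "breach", "amendment", or "none"

# Append one record whenever an outcome (or a clean expiry) is known.
log: list[OutcomeRecord] = []
log.append(OutcomeRecord("C-001", "indemnification", 4.2, "dispute"))
log.append(OutcomeRecord("C-002", "termination", 1.8, "none"))
```

Recording "none" outcomes matters as much as recording problems: without the quiet contracts, there is no baseline to correlate against.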
- Correlation Analysis: Periodically analyze the correlation between scores and outcomes. Are the provisions that produced disputes consistently scored above 3.5? Are the provisions that produced no issues consistently scored below 2.0? If the correlation is strong, the model is validated. If not, the model needs recalibration.
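Because the outcome is binary (problem or no problem) and the score is continuous, one reasonable statistic is the point-biserial correlation, which is just Pearson's r with a 0/1 variable. A stdlib-only sketch with made-up history data:

```python
from statistics import mean, pstdev

def score_outcome_correlation(scores, had_problem):
    """Point-biserial correlation: Pearson r between a continuous
    risk score and a 0/1 problem indicator."""
    n = len(scores)
    mx, my = mean(scores), mean(had_problem)
    cov = sum((x - mx) * (y - my) for x, y in zip(scores, had_problem)) / n
    return cov / (pstdev(scores) * pstdev(had_problem))

# Illustrative history: score at review, 1 = dispute/claim/breach occurred.
scores      = [4.2, 3.8, 3.6, 2.1, 1.8, 1.5]
had_problem = [1,   1,   0,   0,   0,   0]
r = score_outcome_correlation(scores, had_problem)  # strong positive here
```

An r near 1 means high scores reliably precede problems; an r near 0 means the scores carry no predictive signal, whatever else they measure.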
- Model Improvement Cycle: Use validation findings to improve the model. If a dimension consistently fails to predict outcomes, reconsider its weight. If a risk factor that produces disputes is not captured by any dimension, add a new one. The model improves through the feedback loop between scoring and outcomes. [RECOMMEND]: Conduct validation annually with at least twelve months of outcome data.
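The per-dimension check can be sketched by running the same correlation dimension by dimension and flagging the weak ones as reweighting candidates. The dimension names, data, and 0.2 cutoff below are all hypothetical:

```python
from statistics import mean, pstdev

def correlation(xs, ys):
    """Pearson r between two equal-length sequences."""
    n = len(xs)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return cov / (pstdev(xs) * pstdev(ys))

def weak_dimensions(dim_scores, had_problem, min_r=0.2):
    """Flag dimensions whose scores barely correlate with outcomes;
    these are candidates for reweighting in the next model revision."""
    return [dim for dim, scores in dim_scores.items()
            if abs(correlation(scores, had_problem)) < min_r]

# Hypothetical per-dimension scores for six reviewed provisions.
dim_scores = {
    "liability":  [4, 4, 1, 1, 1, 1],  # tracks outcomes closely
    "formatting": [2, 3, 2, 3, 2, 3],  # uncorrelated noise
}
had_problem = [1, 1, 0, 0, 0, 0]
flagged = weak_dimensions(dim_scores, had_problem)  # ["formatting"]
```

A flagged dimension is a prompt for review, not automatic removal: a dimension can be weak in twelve months of data yet still capture a rare, severe risk.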
Read before you sign. Always.
— CLAUSE, Ryan Consulting