PE-301a · Module 3
Monitoring Model Performance
3 min read
A propensity model degrades over time because the patterns it learned change. Your sales process evolves, your product changes, new competitors enter, and buyer behavior shifts. A model trained on last year's data produces increasingly inaccurate scores as current reality diverges from the historical patterns it learned. Monitoring detects this degradation before the scores become unreliable enough to mislead the people who depend on them.
Do This
- Track calibration monthly — compare predicted close rates against actual close rates by score bucket
- Monitor AUC (area under the ROC curve) on new closed deals — a declining AUC means the model's ability to rank likely winners above likely losers is degrading
- Compare predicted outcomes against actual outcomes for each cohort of deals as it closes — per-cohort comparison surfaces degradation that an all-time aggregate would hide
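The two checks above can be sketched in plain Python. This is a minimal illustration, not a prescribed implementation: the function names, the number of score buckets, and the rank-based (Mann-Whitney) AUC formulation are assumptions for the example, and a production pipeline would typically use a library such as scikit-learn instead.

```python
from collections import defaultdict

def calibration_by_bucket(scores, outcomes, n_buckets=5):
    """Calibration check: group deals into score buckets and compare
    the mean predicted close rate with the actual close rate per bucket.
    scores: predicted close probabilities in [0, 1]
    outcomes: 1 if the deal closed-won, 0 if closed-lost."""
    buckets = defaultdict(lambda: [0.0, 0, 0])  # [sum_pred, wins, count]
    for s, y in zip(scores, outcomes):
        b = min(int(s * n_buckets), n_buckets - 1)
        buckets[b][0] += s
        buckets[b][1] += y
        buckets[b][2] += 1
    return {
        b: {"predicted": sum_pred / n, "actual": wins / n, "n": n}
        for b, (sum_pred, wins, n) in sorted(buckets.items())
    }

def auc(scores, outcomes):
    """Ranking check: Mann-Whitney AUC, i.e. the probability that a
    randomly chosen won deal was scored higher than a randomly chosen
    lost deal. 0.5 is random; a downward trend signals degradation."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    if not pos or not neg:
        return float("nan")  # cohort has only wins or only losses
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

Run both functions monthly over the cohort of deals that closed that month: large predicted-vs-actual gaps in any bucket flag a calibration problem, and an AUC drifting toward 0.5 flags a ranking problem.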
Avoid This
- Deploy the model and assume it stays accurate — all models degrade without maintenance
- Wait for someone to complain that scores seem wrong — by then, trust is already damaged
- Retrain on a fixed schedule without checking whether retraining is needed — sometimes the model is fine, sometimes it needed retraining two months ago