BI-201c · Module 3
Building Predictive Models
4 min read
Everything in this course — trigger events, health scores, early warning signals, expansion signals — converges into a predictive model that anticipates what will happen in a customer account before it happens. Predictive customer intelligence is not forecasting with a crystal ball. It is pattern recognition applied to behavioral data: customers who exhibit behavior pattern X have a 75% probability of outcome Y within 90 days.
The predictive model starts with historical data. For every customer who churned in the past two years, document the signals that preceded the churn: when did engagement start declining? When did value metrics plateau? When did the first competitive signal appear? For every customer who expanded, document the signals that preceded expansion: what trigger events occurred? What health score trajectory did they follow? What engagement patterns characterized the pre-expansion period? The historical patterns become your prediction templates.
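The historical documentation described above can be sketched as a simple data structure. This is a minimal illustration — the class names, field names, and example signals are assumptions for the sketch, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SignalEvent:
    """A single observed signal in a customer account (fields are illustrative)."""
    observed_on: date
    category: str   # e.g. "engagement", "value", "competitive"
    detail: str

@dataclass
class CustomerOutcome:
    """One historical case: the outcome plus the signal timeline that preceded it."""
    account: str
    outcome: str                    # "churn" or "expansion"
    outcome_date: date
    timeline: list[SignalEvent] = field(default_factory=list)

# Example: document the signals that preceded one hypothetical churn.
case = CustomerOutcome(
    account="Acme Corp",
    outcome="churn",
    outcome_date=date(2024, 6, 1),
    timeline=[
        SignalEvent(date(2024, 3, 10), "engagement", "executive stopped attending reviews"),
        SignalEvent(date(2024, 4, 2), "value", "usage metric plateaued"),
        SignalEvent(date(2024, 4, 20), "competitive", "competitor mentioned in support ticket"),
    ],
)
```

A collection of these cases — ordered timelines with an outcome attached — is the raw material the prediction templates are built from.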
- **Analyze Historical Patterns:** Review your last 10 to 20 customer outcomes — both churns and expansions. For each, build a timeline of signals leading up to the outcome. What engagement changes occurred? What trigger events were detected? What health score movements were visible? The patterns across multiple cases are your prediction model.
- **Define Signal Combinations:** Individual signals are ambiguous. A cancelled meeting could mean anything. A cancelled meeting combined with declining usage and a competitor's product launch means something specific. Define the signal combinations that historically preceded each outcome type. Two or more signals from different categories occurring within the same period is a pattern worth tracking.
- **Assign Probability Ranges:** Based on historical frequency, assign probability ranges to each signal combination. "Executive withdrawal plus usage decline within 60 days preceded churn in 7 of 10 cases — 70% probability." These are not precise predictions. They are calibrated assessments that inform resource allocation. A 70% churn probability warrants immediate executive intervention. A 30% probability warrants increased monitoring.
- **Test and Calibrate:** Track your predictions against outcomes. Were your 70% predictions right 70% of the time? If your predictions are systematically overconfident or underconfident, adjust the probability ranges. Calibration improves with data volume — after tracking 30 to 50 predictions, your model will be significantly more accurate than intuition alone.
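The "signal combination → probability" step can be sketched in a few lines. The data below is illustrative, mirroring the 7-of-10 worked example in the text; the signal labels and function name are assumptions:

```python
from collections import Counter

# Historical cases: (set of signal categories observed, outcome).
# Illustrative data: 7 of 10 cases with this combination ended in churn.
history = [
    ({"executive_withdrawal", "usage_decline"}, "churn"),
    ({"executive_withdrawal", "usage_decline"}, "churn"),
    ({"executive_withdrawal", "usage_decline"}, "renewed"),
    ({"executive_withdrawal", "usage_decline"}, "churn"),
    ({"executive_withdrawal", "usage_decline"}, "churn"),
    ({"executive_withdrawal", "usage_decline"}, "churn"),
    ({"executive_withdrawal", "usage_decline"}, "renewed"),
    ({"executive_withdrawal", "usage_decline"}, "churn"),
    ({"executive_withdrawal", "usage_decline"}, "renewed"),
    ({"executive_withdrawal", "usage_decline"}, "churn"),
]

def combination_probability(history, combo, outcome):
    """P(outcome | all signals in combo present), from historical frequency."""
    matches = [o for signals, o in history if combo <= signals]
    if not matches:
        return None  # no historical basis for a prediction
    return Counter(matches)[outcome] / len(matches)

p = combination_probability(history, {"executive_withdrawal", "usage_decline"}, "churn")
print(f"{p:.0%}")  # 7 of 10 matching cases -> 70%
```

Note the guard for an empty match set: a combination with no historical precedent should yield "no prediction," not a default probability.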
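The calibration check in the final step can also be made concrete: group tracked predictions by their stated probability and compare each group to the observed outcome rate. The prediction records here are invented for illustration:

```python
# Each tracked prediction: (stated churn probability, did the account churn?).
# Illustrative data only.
predictions = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, True),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
]

def calibration_by_bucket(predictions):
    """Group predictions by stated probability; return observed churn rate per group."""
    buckets = {}
    for stated, churned in predictions:
        buckets.setdefault(stated, []).append(churned)
    return {stated: sum(v) / len(v) for stated, v in buckets.items()}

for stated, observed in sorted(calibration_by_bucket(predictions).items()):
    drift = observed - stated
    print(f"stated {stated:.0%} -> observed {observed:.0%} (drift {drift:+.0%})")
```

A systematically positive drift means the model is underconfident (raise the stated probabilities); a negative drift means it is overconfident (lower them). With only a handful of predictions per bucket the drift is noisy, which is why the text recommends 30 to 50 tracked predictions before trusting the adjustment.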