DS-201c · Module 1
Confidence Intervals and Uncertainty
4 min read
"Revenue will be $4.2M next quarter." That statement is worse than useless. It is precisely wrong. The false precision creates false confidence, which leads to under-hedged plans that collapse when reality diverges from the point estimate.
"Revenue will be $3.8M to $4.6M next quarter, with 80% confidence." That statement is genuinely useful. It tells leadership the expected range, the probability of landing within it, and implicitly the 20% probability of landing outside it. Different plans for the bottom of the range versus the top. That is what forecasts are for.
- **Understanding Confidence Levels.** 80% confidence means the true value will fall within the interval 80% of the time. It also means it will fall outside the interval 20% of the time. The wider the interval, the higher the confidence. A 99% confidence interval is very wide and very safe. A 50% confidence interval is narrow but misses half the time. For business decisions, 80% confidence is the standard.
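The trade-off between confidence level and interval width can be sketched under a normal-error assumption. The $4.2M point estimate is from the example above; the $0.4M standard error is illustrative, not from the course:

```python
from statistics import NormalDist

point = 4.2  # point forecast in $M (from the example above)
se = 0.4     # illustrative standard error of the forecast, in $M

for conf in (0.50, 0.80, 0.99):
    # Two-sided z multiplier for the given confidence level.
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    lo, hi = point - z * se, point + z * se
    print(f"{conf:.0%}: ${lo:.1f}M to ${hi:.1f}M (width ${hi - lo:.1f}M)")
```

The 99% interval comes out roughly four times as wide as the 50% interval, which is why very high confidence levels stop being actionable.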
- **Calculating the Interval.** The interval width comes from historical forecast error. If 80% of your model's historical errors fall within ±12%, then an 80% confidence interval on a $4.2M forecast is roughly $3.7M to $4.7M. AI calculates this from the residuals of your decomposition. The more noise in the data, the wider the interval.
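One way to compute such an interval is to take the 10th and 90th percentiles of historical fractional errors, leaving 10% of misses on each side. A minimal sketch, with made-up residuals chosen to land near the ±12% figure above:

```python
import numpy as np

# Illustrative historical forecast errors as fractions of the forecast
# (actual / forecast - 1). Not real course data.
residuals = np.array([-0.15, -0.11, -0.06, -0.03, 0.01,
                      0.04, 0.07, 0.09, 0.12, 0.14])

forecast = 4.2  # point forecast in $M

# An 80% interval spans the 10th to 90th percentile of historical errors.
lo_err, hi_err = np.percentile(residuals, [10, 90])
lo, hi = forecast * (1 + lo_err), forecast * (1 + hi_err)
print(f"80% interval: ${lo:.1f}M to ${hi:.1f}M")
```

With these residuals the interval is about $3.7M to $4.7M, matching the worked example. With noisier residuals, the percentiles spread and the interval widens.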
- **Presenting to Stakeholders.** Never present the interval as "we don't know." Present it as "here is the range we plan for." The worst case is the bottom of the interval. The best case is the top. The expected case is the midpoint. Three scenarios, one forecast, three plans.
- **Tracking Calibration.** Every quarter, check: did the actual result fall within the stated interval? If you said 80% confidence, then roughly 8 out of 10 actuals should land within the interval. If 5 out of 10 do, your intervals are too narrow. If 10 out of 10 do, your intervals are too wide. Calibration is a skill that improves with tracking.
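The quarterly check is simple enough to sketch directly. The intervals and actuals below are made-up examples, not course data:

```python
# Each record is (low, high, actual) for a past quarterly forecast, in $M.
history = [
    (3.8, 4.6, 4.1),
    (4.0, 4.9, 5.1),   # miss: actual landed above the interval
    (3.5, 4.2, 3.9),
    (4.1, 5.0, 4.4),
    (3.9, 4.7, 4.0),
]

# Fraction of actuals that landed inside their stated interval.
hits = sum(low <= actual <= high for low, high, actual in history)
hit_rate = hits / len(history)
print(f"Calibration: {hits}/{len(history)} = {hit_rate:.0%} (target: 80%)")

if hit_rate < 0.8:
    print("Intervals may be too narrow: widen them.")
```

Here 4 of 5 actuals land inside, an 80% hit rate, which matches the stated confidence. A meaningfully lower rate means the intervals are too narrow; a rate near 100% means they are wider than they need to be.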
BLITZ hates confidence intervals. She wants a number. I give her a range. We have this conversation every quarter. And every quarter, the actual result falls within the range, and she admits the range was more useful than a point estimate would have been.
The psychological barrier is real: ranges feel uncertain, and executives want certainty. But the certainty of a point estimate is an illusion. The range is honest. Honest forecasts build trust over time. Point estimates that miss destroy it.
Do This
- Present every forecast as a range with a stated confidence level (80% standard)
- Track calibration quarterly — do actual results fall within intervals at the expected rate?
- Present three scenarios: worst case (bottom), expected (midpoint), best case (top)
Avoid This
- Present single-point forecasts — they are precisely wrong and dangerously misleading
- Use 95% or 99% confidence intervals for business decisions — the range is too wide to be actionable
- Skip calibration tracking — uncalibrated intervals are guesses dressed as statistics
I track my own prediction accuracy at 84.3%. Not because I enjoy the number — because I do not trust analysts who cannot tell you how often they are wrong.
— CIPHER