EI-201c · Module 3

Forecast Accountability

3 min read

Forecast accountability means tracking, publishing, and learning from your forecast accuracy over time. This is the practice that separates professional intelligence from punditry. Pundits make bold predictions and never revisit them. Intelligence professionals make calibrated predictions, track the outcomes, publish the results, and use the data to improve. Your forecast track record is the most important asset your intelligence practice produces — more important than any individual briefing.

  1. Maintain a Forecast Log. Every forecast goes into a log with: the date, the specific prediction, the probability assigned, the resolution criteria (how will you know if it happened?), the resolution deadline, and eventually, the outcome. Use a spreadsheet. Keep it simple. The discipline of recording predictions matters more than the format of the log.
  2. Resolve and Score Quarterly. Each quarter, review all forecasts that have reached their resolution deadline. Score them: did the predicted event happen? Was your probability calibrated, meaning your 60% predictions resolve positively about 60% of the time? Calculate your Brier score or simply plot your calibration curve, and share the results with your briefing consumers.
  3. Diagnose Misses. For every significant miss, conduct a brief post-mortem: what did you get wrong? Was it the signal analysis, the probability estimate, or the timing? Were there signals you missed or misinterpreted? Each miss diagnosis should produce a specific improvement to your monitoring or analysis process. Misses are expensive, so extract the maximum learning from each one.
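The log-and-score loop above can be sketched in a few lines of Python. The `Forecast` record and the two scoring helpers are illustrative names, not part of any library, and a spreadsheet works just as well; this is only a minimal sketch of what "log, resolve, score" means in code.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Forecast:
    made_on: date
    prediction: str           # the specific claim
    probability: float        # assigned probability, 0.0..1.0
    resolution_criteria: str  # how you will know it happened
    deadline: date            # resolution deadline
    outcome: Optional[bool] = None  # filled in at resolution time

def brier_score(forecasts):
    """Mean squared error between stated probability and outcome (0 = perfect)."""
    resolved = [f for f in forecasts if f.outcome is not None]
    return sum((f.probability - float(f.outcome)) ** 2 for f in resolved) / len(resolved)

def calibration_buckets(forecasts):
    """Hit rate per probability decile: well-calibrated 60% forecasts hit ~60%."""
    buckets = {}
    for f in forecasts:
        if f.outcome is None:
            continue
        b = min(round(f.probability * 10), 9)  # nearest decile, 0..9
        buckets.setdefault(b, []).append(f.outcome)
    return {b / 10: sum(hits) / len(hits) for b, hits in sorted(buckets.items())}
```

For intuition: a 0.6 forecast that resolves true contributes (0.6 - 1)² = 0.16 to the Brier score, while the same forecast resolving false contributes (0.6 - 0)² = 0.36, so confident wrong calls cost you the most.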

Do This

  • Publish your forecast accuracy to your readers — transparency builds trust even when the numbers are imperfect
  • Celebrate calibration improvements, not individual correct predictions — calibration is the skill, individual predictions are luck
  • Use misses as diagnostic tools — the pattern in your misses reveals systematic biases you can correct

Avoid This

  • Hide your misses and highlight your hits — your readers will do their own scoring, and selective reporting destroys credibility
  • Abandon probabilistic forecasting because your first calibration curve is imperfect — calibration improves with practice
  • Blame external events for misses — if the event was unforeseeable, your probability should have been lower