SD-201b · Module 1

Deal Scoring Models That Predict

4 min read

Rep-reported pipeline stage is wrong 44% of the time. That is not an insult — it is a measurement. CIPHER tracked the correlation between rep-reported stage and actual outcome across 6,800 opportunities. Reps are optimists. The data is not.

AI deal scoring replaces gut feel with pattern recognition. Instead of asking a rep "how confident are you?" — which produces comforting lies — you feed engagement data, stakeholder activity, email sentiment, and timeline signals into a model trained on your actual closed-won and closed-lost deals.
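A minimal sketch of that idea: a hand-rolled logistic model trained on historical closed-won (1) and closed-lost (0) deals. The feature names and training data below are illustrative, not from this module, and a production model would use far more signals and a proper ML library.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(deals, labels, lr=0.1, epochs=2000):
    """Fit logistic-regression weights via stochastic gradient descent."""
    n = len(deals[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(deals, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # prediction error drives the weight update
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def win_probability(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Illustrative features per deal:
# [champion_responds_fast, meeting_attendance_rate, sentiment_improving]
history = [
    ([1, 0.9, 1], 1), ([1, 0.8, 1], 1), ([1, 0.7, 0], 1),
    ([0, 0.3, 0], 0), ([0, 0.4, 0], 0), ([0, 0.5, 1], 0),
]
w, b = train([x for x, _ in history], [y for _, y in history])
print(win_probability(w, b, [1, 0.85, 1]))  # well-engaged deal: high probability
print(win_probability(w, b, [0, 0.30, 0]))  # disengaged deal: low probability
```

The point is not this particular algorithm; it is that the labels come from what actually closed, not from what a rep hoped would close.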

AI DEAL SCORING FRAMEWORK
=========================

ENGAGEMENT SCORE (40% weight)
  - Champion response time (< 4 hrs = high signal)
  - Meeting attendance rate (champion + stakeholders)
  - Content engagement (proposals opened, links clicked)
  - Email sentiment trajectory (improving / flat / declining)

STAKEHOLDER SCORE (25% weight)
  - Multi-threading depth (contacts engaged per account)
  - Economic buyer identified and engaged (Y/N)
  - Champion access to decision authority (direct / indirect)
  - Stakeholder sentiment across contacts

PROCESS SCORE (20% weight)
  - Stage velocity vs. average (ahead / on-pace / behind)
  - Defined next steps with dates (Y/N)
  - Mutual action plan in place (Y/N)
  - Legal / procurement engaged (when applicable)

TIMING SCORE (15% weight)
  - Compelling event identified (Y/N)
  - Budget cycle alignment (in-cycle / off-cycle)
  - Competitive pressure timeline
  - Prospect-stated decision date vs. actual pace
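The composite is then a straightforward weighted sum. A sketch using the weights from the framework above, assuming each sub-score has already been normalized to a 0-100 scale (how you normalize the raw signals is your call):

```python
# Weights taken directly from the framework above.
WEIGHTS = {
    "engagement": 0.40,
    "stakeholder": 0.25,
    "process": 0.20,
    "timing": 0.15,
}

def composite_score(sub_scores):
    """Weighted 0-100 composite from the four sub-scores (each 0-100)."""
    return round(sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS), 1)

# Illustrative deal: strong engagement, thin stakeholder coverage.
deal = {"engagement": 70, "stakeholder": 40, "process": 55, "timing": 30}
print(composite_score(deal))  # → 53.5
```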

The scoring framework above produces a 0-100 composite score for every deal. But here is the part most teams miss: the score is not the insight. The insight is the gap between the score and where the deal should be at that stage.

A deal at Proposal stage with a score of 45 is in trouble — by that stage, the average closed-won deal scores 68. The 23-point gap flags the risk; the sub-score breakdown tells you exactly where the deal is weak. No champion engagement? Low stakeholder score. Moving slowly? Process score drops. No compelling event? Timing score craters. The model does not just score — it diagnoses.
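That diagnosis step can be sketched as a comparison of each sub-score against stage benchmarks built from your closed-won data. The benchmark numbers below are illustrative (only the Proposal-stage composite of 68 appears in this module); yours come from your own history.

```python
# Illustrative average closed-won sub-scores at Proposal stage.
PROPOSAL_BENCHMARKS = {
    "engagement": 72, "stakeholder": 65, "process": 70, "timing": 60,
}

def diagnose(sub_scores, benchmarks):
    """Return (area, gap) pairs sorted by gap to benchmark, worst first.

    A positive gap means the deal is that many points below benchmark.
    """
    gaps = {k: benchmarks[k] - sub_scores[k] for k in benchmarks}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative deal: engagement is fine, stakeholder coverage is not.
deal = {"engagement": 70, "stakeholder": 35, "process": 50, "timing": 40}
for area, gap in diagnose(deal, PROPOSAL_BENCHMARKS):
    print(f"{area}: {gap} points vs. benchmark")
```

The first item in the output is the coaching conversation: here it is stakeholder coverage, not effort.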

Do This

  • Score every deal weekly using engagement, stakeholder, process, and timing signals
  • Compare deal scores to stage-appropriate benchmarks from your closed-won data
  • Use score gaps to diagnose specific weaknesses and coach accordingly

Avoid This

  • Rely on rep self-reported confidence — it correlates with personality, not probability
  • Use a single metric (like "days in stage") as a proxy for deal health
  • Score deals once and forget — signals change weekly and so should scores