DS-201b · Module 3

Automated Narrative Dashboards

4 min read

A number without context is an anxiety generator. The pipeline coverage ratio dropped from 3.2x to 2.8x. Is that bad? Is that seasonal? Is it one deal slipping or a systemic problem? The number alone does not tell you. The narrative does.

Automated narrative dashboards pair every metric with an AI-generated explanation of what changed, why it changed, and what — if anything — should be done about it. The dashboard does not just show data. It tells the story of the data.

AUTOMATED NARRATIVE DASHBOARD
==============================

┌─────────────────────────────────────────────────┐
│  PIPELINE COVERAGE: 2.8x  ▼ (was 3.2x)  [YELLOW] │
│                                                  │
│  NARRATIVE: Coverage declined 12.5% this week    │
│  driven by two factors: (1) $340K deal at Acme   │
│  Corp pushed to Q3 after budget reallocation,    │
│  (2) three early-stage deals closed-lost after   │
│  competitive displacement by Vendor X.           │
│                                                  │
│  CONTEXT: Seasonal pattern shows Q1 coverage     │
│  typically dips 8-10% in Week 6. Current decline │
│  is 12.5% — slightly above seasonal norm.        │
│                                                  │
│  RECOMMENDATION: Increase HUNTER outbound volume  │
│  in mid-market segment by 30%. Current mid-market│
│  pipeline is 1.9x (below 2.5x segment threshold).│
│  Enterprise coverage remains healthy at 3.4x.    │
└─────────────────────────────────────────────────┘

The narrative layer has three components. The what: "coverage dropped 12.5%." The why: "driven by deal slip and competitive losses." The so-what: "increase outbound in mid-market segment." Without all three, the viewer has data but not direction.
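The three components map naturally onto a small data structure. A minimal sketch, assuming a Python dashboard backend; the class and field names are illustrative, not from any specific BI tool:

```python
from dataclasses import dataclass

@dataclass
class MetricNarrative:
    """The three narrative components paired with a single metric."""
    what: str     # the change itself, e.g. "coverage dropped 12.5%"
    why: str      # attribution to the specific drivers of the move
    so_what: str  # the recommended action

    def render(self) -> str:
        # Render in the same order the dashboard panel uses.
        return (f"NARRATIVE: {self.what}\n"
                f"DRIVERS: {self.why}\n"
                f"RECOMMENDATION: {self.so_what}")
```

Forcing every narrative through one structure makes it easy to spot a panel that shipped data without direction: an empty `so_what` fails review.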

AI generates these narratives by comparing the current period to baselines, identifying the specific changes that drove the movement, and applying predefined decision frameworks to recommend actions. The narrative updates every time the data updates. No analyst needs to write the weekly summary. The dashboard writes its own commentary.
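The baseline comparison and threshold playbook can be sketched deterministically before any language model is involved. A hypothetical generator, assuming the numbers from the dashboard example above; the function name, parameters, and 2.5x segment threshold are illustrative:

```python
def generate_narrative(current: float, previous: float,
                       seasonal_dip_pct: float,
                       segment_coverage: dict[str, float],
                       segment_threshold: float = 2.5) -> str:
    """Compare the current period to a baseline, flag whether the move
    exceeds the seasonal norm, and apply a simple threshold playbook."""
    delta_pct = (previous - current) / previous * 100
    lines = [f"Coverage declined {delta_pct:.1f}% this week "
             f"({previous}x -> {current}x)."]
    # Seasonal context: is this decline normal for the period?
    if delta_pct > seasonal_dip_pct:
        lines.append(f"Decline exceeds the ~{seasonal_dip_pct:.0f}% seasonal norm.")
    else:
        lines.append("Decline is within the seasonal norm.")
    # Predefined playbook: recommend action only where a segment
    # falls below its coverage threshold.
    for segment, coverage in segment_coverage.items():
        if coverage < segment_threshold:
            lines.append(f"RECOMMENDATION: increase outbound in {segment} "
                         f"(coverage {coverage}x is below the "
                         f"{segment_threshold}x segment threshold).")
    return " ".join(lines)
```

In practice the AI layer would phrase the attribution ("driven by the Acme Corp slip") on top of this skeleton, but the deltas, norms, and thresholds stay rule-based so the numbers in the narrative are always reproducible.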

  1. Define narrative templates. For each metric, define the narrative structure: what changed (delta from baseline), why it changed (attribution to specific drivers), seasonal context (is this normal for this period?), and the recommended action (based on predefined thresholds and playbooks).
  2. Train the AI on your context. Feed the AI historical data, previous analyst narratives, and decision outcomes. It learns which explanations were accurate, which recommendations were followed, and which actions produced results. Narrative quality improves with every cycle.
  3. Add a human review gate. AI-generated narratives go through a weekly review by the data team: not for grammar, but for accuracy and appropriateness. A narrative that recommends aggressive action based on a data-quality issue damages trust. The human gate catches what the AI cannot.
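Steps 1 and 3 above can be sketched together. A hypothetical template and review gate, assuming a Python backend; the dictionary keys, threshold values, and function names are illustrative placeholders, not a real product API:

```python
# Step 1: one narrative template per metric, with the baselines and
# thresholds the generator will read from.
NARRATIVE_TEMPLATES = {
    "pipeline_coverage": {
        "what": "Coverage {direction} {delta_pct:.1f}% this week",
        "seasonal_norm_pct": 10.0,   # typical Week-6 dip (assumed value)
        "action_threshold": 2.5,     # segment coverage floor (assumed value)
        "playbook": "Increase outbound volume in the {segment} segment",
    },
}

def review_gate(narrative: str, approved_by_human: bool) -> str:
    """Step 3: nothing publishes without the weekly accuracy review."""
    if not approved_by_human:
        # Held narratives never reach the dashboard, so a data-quality
        # issue cannot trigger an aggressive recommendation.
        return "[HELD FOR REVIEW] " + narrative
    return narrative
```

The gate is deliberately binary: an unreviewed narrative is held, never published with a caveat, because a wrong recommendation on the dashboard costs more trust than a delayed one.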