EC-301d · Module 3
When Charts Lie Accidentally
4 min read
The most dangerous chart in an executive deck is not the one that lies deliberately. It is the one that misleads accidentally — the presenter did not intend to deceive, but the visual creates a false impression that drives a bad decision. Accidental deception is worse than intentional deception in one critical way: you have not checked for it. You believed the chart was accurate, so you defended it confidently, and now you own the bad decision it produced.
Four patterns account for the majority of accidental deception in executive charts. Truncated axes make small differences look enormous: a trend line from 93% to 97% on an axis starting at 90% looks as if the metric more than doubled; on an axis starting at 0% it looks like what it is, a minor improvement. Cherry-picked timeframes exclude the period when performance was worse, making recent results look like a sustained trend rather than a recovery. Survivorship bias presents the average outcome of successful deployments while excluding the deployments that failed. Vanity metrics show impressive-sounding numbers (events processed, documents analyzed, queries handled) that do not connect to business value.
- Check: does the axis start at zero? If the y-axis does not start at zero, ask whether the visual difference exaggerates the actual difference. A 4-point improvement shown on an axis from 90 to 100 spans 40% of the chart's height; on an axis from 0 to 100 it looks like what it is: a 4-point improvement. Use truncated axes only when zero is meaningless for the metric (e.g., temperature, pH). For business metrics, start at zero unless you have an explicit reason not to, and label the reason. The first sketch after this list shows both versions side by side.
- Check: does the timeframe include the bad periods? If you are showing a trend that starts at the bottom of a dip and ends at a peak, you are showing cherry-picked data. The executive who asks "what was performance before that starting point?" will expose the cherry-pick. Show the full available timeframe, and annotate any anomalies rather than cropping them out. The second sketch after this list shows how a cropped window flips the story.
- Check: does the average include the failures? If the chart shows "average ROI for AI deployments: 340%," ask whether the deployments that failed to generate ROI are included in that average. Survivorship bias in peer benchmarks is common. If you are using industry data, validate the methodology and note any exclusions. If the benchmark excludes failures, say so, or use a different benchmark. The third sketch after this list shows how much a survivors-only average inflates the number.
- Check: does the metric connect to business value? Before including any metric, run this test: if the executive asks "what does that mean for our P&L?" can you answer directly? "We processed 2.4 million events" does not pass the test. "We reduced manual review labor by $1.8M annually" does. Include only metrics that pass the test, and flag the ones that do not as volume indicators, not value indicators. The fourth sketch after this list makes the test mechanical.
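To make the axis check concrete, here is a minimal matplotlib sketch that plots the same 4-point improvement twice: once on a truncated axis, once on a zero-based axis. The quarterly labels and accuracy values are invented for illustration.

```python
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
accuracy = [93, 94, 95, 97]  # hypothetical 4-point improvement

fig, (ax_trunc, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

# Truncated axis: the 4-point gain spans most of the chart's height.
ax_trunc.plot(quarters, accuracy, marker="o")
ax_trunc.set_ylim(90, 100)
ax_trunc.set_title("Axis starts at 90 (exaggerates)")

# Zero-based axis: the same gain shown at its true scale.
ax_zero.plot(quarters, accuracy, marker="o")
ax_zero.set_ylim(0, 100)
ax_zero.set_title("Axis starts at 0 (honest)")

fig.tight_layout()
plt.show()
```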
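The timeframe check can be run before any chart is drawn. This sketch uses an invented monthly latency series with a mid-year spike and compares the story told by a window cropped at the worst month against the full history:

```python
# Hypothetical monthly latency in milliseconds; the spike peaks in month 5.
monthly_latency_ms = [210, 215, 240, 290, 310, 280, 250, 230, 220, 215]

full = monthly_latency_ms
cropped = monthly_latency_ms[4:]  # window starts at the worst month

def pct_change(series):
    """Percent change from the first to the last point of the window."""
    return (series[-1] - series[0]) / series[0] * 100

print(f"Cropped window: {pct_change(cropped):+.1f}%")  # -30.6%: looks like a big win
print(f"Full history:   {pct_change(full):+.1f}%")     # +2.4%: latency barely moved
```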
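The survivorship check is a two-line computation once the failures are in the dataset. A sketch with hypothetical deployment names and ROI figures:

```python
# Hypothetical deployment outcomes; failed projects have zero or negative ROI.
deployments = [
    ("invoice-triage", 340),    # ROI in percent
    ("chat-deflection", 180),
    ("forecast-model", -60),    # failed: costs exceeded returns
    ("doc-summarizer", 0),      # abandoned before launch
]

survivors = [roi for _, roi in deployments if roi > 0]
all_rois = [roi for _, roi in deployments]

print(f"Survivors only:  {sum(survivors) / len(survivors):.0f}%")  # 260%
print(f"All deployments: {sum(all_rois) / len(all_rois):.0f}%")    # 115%
```

The gap between the two numbers is the size of the claim the benchmark is quietly making on your behalf.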
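The P&L test can be made mechanical by refusing to chart any metric that lacks a direct dollar answer. A small sketch, with hypothetical metric records and a hypothetical `pnl_impact_usd` field standing in for that answer:

```python
# Hypothetical metric records; pnl_impact_usd is None when there is no direct
# dollar answer to "what does that mean for our P&L?"
metrics = [
    {"name": "events processed",          "headline": "2.4M",     "pnl_impact_usd": None},
    {"name": "documents analyzed",        "headline": "600K",     "pnl_impact_usd": None},
    {"name": "manual review labor saved", "headline": "$1.8M/yr", "pnl_impact_usd": 1_800_000},
]

for m in metrics:
    kind = "value indicator" if m["pnl_impact_usd"] is not None else "volume indicator"
    print(f"{m['name']} ({m['headline']}): {kind}")
```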