KM-301i · Module 1

The Vanity Metric Trap

3 min read

Every knowledge management program I have reviewed that was struggling to maintain executive support had the same problem: it was reporting vanity metrics. Articles published. Users onboarded. Searches performed. Sessions this month. These numbers go up, and they look like progress, but they have no demonstrable connection to business outcomes. When the budget review comes and the CFO asks "what are we getting for this investment?", the answer cannot be "we published 400 articles and had 12,000 searches." The answer must be "we reduced time-to-answer for our support team by 40%, which is equivalent to adding two FTEs without adding headcount."

  1. Identifying Vanity Metrics. A vanity metric is any metric that: can increase while value decreases (total articles can grow while retrieval quality falls), measures activity rather than outcome (queries performed vs. queries successfully resolved), or has no clear causal relationship to business impact (user registrations vs. decisions improved by the system). Apply this test to every metric in the reporting framework.
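The three-part test above can be sketched as a simple checklist. This is a minimal illustration, not a standard schema; the metric names and checklist fields are assumptions chosen to mirror the examples in the text.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    can_rise_while_value_falls: bool     # e.g. total articles vs. retrieval quality
    measures_activity_not_outcome: bool  # e.g. queries performed vs. queries resolved
    lacks_causal_link_to_impact: bool    # e.g. registrations vs. decisions improved

def is_vanity(m: Metric) -> bool:
    """A metric failing ANY one of the three tests is a vanity metric."""
    return (m.can_rise_while_value_falls
            or m.measures_activity_not_outcome
            or m.lacks_causal_link_to_impact)

# Illustrative classifications, matching the article's examples
total_articles = Metric("total articles published", True, True, True)
gap_rate = Metric("knowledge gap rate", False, False, False)

print(is_vanity(total_articles))  # True  -- vanity metric
print(is_vanity(gap_rate))        # False -- outcome metric
```

The point of encoding the test this way is that the three conditions are independent: failing any single one is enough to flag the metric.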
  2. Replacing Vanity Metrics. For each vanity metric, identify the outcome metric it is a proxy for. Total articles → knowledge gap rate (the outcome is coverage, not volume). User registrations → monthly active users (the outcome is actual usage, not sign-ups). Search sessions → retrieval success rate (the outcome is finding what was needed, not performing a search). Replace the proxy with the outcome metric, or add the outcome metric alongside it with explicit linkage.
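A quick sketch of step 2 in arithmetic terms: deriving outcome rates from the raw activity counts a vanity dashboard would report. All counts here are hypothetical illustrations, not figures from the text.

```python
# Hypothetical raw counts (the kind a vanity dashboard reports)
searches_performed = 12_000       # activity: searches run
searches_resolved = 9_600         # outcome numerator: user found an answer
queries_with_no_article = 480     # searches where no article covered the topic
registered_users = 4_000          # activity: sign-ups
active_users_this_month = 1_100   # outcome: people actually using the system

# Outcome metrics derived from the same data
retrieval_success_rate = searches_resolved / searches_performed    # 0.80
knowledge_gap_rate = queries_with_no_article / searches_performed  # 0.04
monthly_active_rate = active_users_this_month / registered_users   # 0.275

print(f"retrieval success: {retrieval_success_rate:.0%}")  # 80%
print(f"knowledge gap:     {knowledge_gap_rate:.0%}")      # 4%
print(f"monthly active:    {monthly_active_rate:.1%}")     # 27.5%
```

Note that each outcome metric is a ratio over the corresponding activity count, which is the "explicit linkage" the text calls for when the proxy is kept alongside the outcome.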
  3. The Metric Review. Every reporting framework should be reviewed semi-annually with the question: "Which of these metrics could go up while the system was getting worse?" Any metric for which the answer is yes is a candidate for replacement or supplementation. This review is not a one-time cleanup — it is a recurring quality gate on the measurement system itself.
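The semi-annual review question can be run mechanically over a reporting framework: for each metric, record whether it could read as progress while the system degrades, then collect the flagged candidates. The framework contents below are illustrative assumptions.

```python
# Each metric is annotated with the answer to the review question:
# "Could this go up while the system was getting worse?"
reporting_framework = {
    "articles published":     True,   # grows even if retrieval quality falls
    "searches performed":     True,   # grows if users can't find answers first try
    "retrieval success rate": False,  # falls when the system degrades
    "monthly active users":   False,  # falls when the system stops being useful
}

candidates_for_replacement = [
    name for name, could_mislead in reporting_framework.items() if could_mislead
]
print(candidates_for_replacement)  # ['articles published', 'searches performed']
```

Re-running this audit each cycle is what makes the review a recurring quality gate rather than a one-time cleanup.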