KM-301i · Module 1
Usage Metrics & Retrieval Quality Metrics
4 min read
Usage metrics answer the question "are people using the system?" Retrieval quality metrics answer the question "when people use the system, does it work?" Both matter, but neither is sufficient alone. A knowledge system with high usage and low retrieval quality is a system that many people use to get wrong answers. A knowledge system with high retrieval quality and low usage is a system that works well for nobody. The measurement framework must track both — and must distinguish between them when diagnosing performance problems.
- Usage Metrics: Monthly active users (unique users who performed at least one retrieval), queries per active user per day, return rate (percentage of users who returned within 7 days of first use), and channel distribution (what percentage of queries come from which integration). Usage metrics establish adoption. They do not establish value. A system with high usage that produces wrong answers or fails to find what users need has high activity and negative value.
- Retrieval Quality Metrics: Retrieval success rate (percentage of queries where the user engaged with the top result — clicked, copied, or rated positively), zero-result rate (percentage of queries that returned no results — each is a knowledge gap signal), and user-reported relevance (the percentage of results rated as "helpful" or better in explicit feedback flows). These metrics answer whether the system is finding what users need.
- Engagement Depth Metrics: Beyond retrieval success, measure what users do with what they find: read completion rate (did the user read the full article or abandon after the first paragraph?), click-through to source rate (did the user follow the deep link to the full document?), and follow-up query rate (did the user ask another question immediately after — indicating the first result was insufficient?). Engagement depth distinguishes a system that finds the right article from one that users skim and abandon.
- System Performance Metrics: Retrieval latency (p50, p90, p99 — users abandon at p99 above 3 seconds), integration uptime (what percentage of the time are integrations delivering knowledge when called?), and index freshness (percentage of knowledge base articles within their defined freshness SLA). System performance metrics are the table stakes — a system with good content metrics but poor performance metrics will lose users regardless of quality.
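The usage and retrieval quality definitions above can be sketched against a query log. This is a minimal illustration only: the log schema, field names, and sample records are assumptions, not a prescribed format.

```python
from datetime import datetime, timedelta

# Hypothetical query-log records (schema is an assumption for illustration).
# Each record: user id, timestamp, result count, and whether the user
# engaged with the top result (clicked, copied, or rated positively).
log = [
    {"user": "u1", "ts": datetime(2024, 5, 1, 9), "results": 8, "engaged_top": True},
    {"user": "u1", "ts": datetime(2024, 5, 6, 14), "results": 0, "engaged_top": False},
    {"user": "u2", "ts": datetime(2024, 5, 2, 11), "results": 3, "engaged_top": False},
    {"user": "u3", "ts": datetime(2024, 5, 3, 10), "results": 5, "engaged_top": True},
]

def monthly_active_users(log):
    """Unique users with at least one retrieval in the log window."""
    return len({q["user"] for q in log})

def zero_result_rate(log):
    """Share of queries returning no results -- each is a knowledge gap signal."""
    return sum(1 for q in log if q["results"] == 0) / len(log)

def retrieval_success_rate(log):
    """Share of queries where the user engaged with the top result."""
    return sum(1 for q in log if q["engaged_top"]) / len(log)

def return_rate(log, window=timedelta(days=7)):
    """Share of users who queried again within `window` of their first query."""
    first, returned = {}, set()
    for q in sorted(log, key=lambda q: q["ts"]):
        u = q["user"]
        if u not in first:
            first[u] = q["ts"]
        elif q["ts"] - first[u] <= window:
            returned.add(u)
    return len(returned) / len(first)

print(monthly_active_users(log))    # 3
print(zero_result_rate(log))        # 0.25 -- one query returned nothing
print(retrieval_success_rate(log))  # 0.5
```

Note that each function reports a separate dimension; nothing here averages usage and quality into a single score, which is exactly the separation the framework requires.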
Do This
- Track usage metrics and retrieval quality metrics as separate dimensions — do not average them
- Report the zero-result rate as a knowledge gap metric, not as a system failure metric — it is the most actionable signal in the system
- Use engagement depth metrics to identify content that is being found but not used — these are content quality problems, not retrieval problems
- Set performance SLAs and track against them weekly — performance degradation kills adoption before quality metrics reflect it
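The weekly SLA check above can be sketched as a percentile computation over each week's latency samples. The thresholds and sample data are illustrative assumptions; only the p99-above-3-seconds figure comes from the section itself.

```python
def percentile(samples, p):
    """Nearest-rank percentile over retrieval latencies (in seconds)."""
    ranked = sorted(samples)
    k = max(0, round(p / 100 * len(ranked)) - 1)
    return ranked[k]

# Illustrative SLA thresholds (assumed values, except p99: the section
# notes users abandon when p99 exceeds 3 seconds).
SLA = {50: 0.5, 90: 1.5, 99: 3.0}

def weekly_sla_report(latencies):
    """Map each tracked percentile to (observed value, threshold, pass?)."""
    return {p: (percentile(latencies, p), limit, percentile(latencies, p) <= limit)
            for p, limit in SLA.items()}

# One hypothetical week of latency samples, in seconds.
week = [0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.6, 0.8, 1.2, 3.4]
for p, (value, limit, ok) in weekly_sla_report(week).items():
    print(f"p{p}: {value:.1f}s (limit {limit}s) {'PASS' if ok else 'FAIL'}")
```

In this sample week, p50 and p90 pass but p99 fails on a single slow outlier, which is the point of tracking tail percentiles rather than averages: one slow query per hundred is enough to drive abandonment before the mean moves at all.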
Avoid This
- Report page views or total queries as the primary usage metric — they measure activity, not value
- Treat high usage as a proxy for high value — usage measures engagement, not outcome
- Ignore the zero-result rate because it makes the system look incomplete — it is the most direct signal for knowledge investment priority