KM-301c · Module 1
Accountability Structures
5 min read
Knowledge maintenance does not happen spontaneously. It happens because it is someone's actual job — and "actual job" means it appears in their performance criteria, their manager asks about it, and failing to do it has consequences. An accountability structure is the organizational mechanism that makes knowledge maintenance real rather than aspirational.
Most knowledge governance frameworks describe what should happen. The accountability structure describes who is responsible, how performance is measured, and what happens when the maintenance is not done. Without the accountability layer, the governance framework is a document, not a system.
- **Make Knowledge Maintenance a Formal Responsibility.** Stewardship responsibilities belong in job descriptions and performance review criteria. "Maintains knowledge base accuracy for [domain] at a rate of ≥95% reviewed within SLA" is a measurable, trackable responsibility. "Contributes to team knowledge sharing" is not. The specificity matters: vague responsibilities produce vague accountability, which produces no accountability at all when something else is more urgent.
- **Define the Measurement System.** What gets measured gets managed. Steward performance metrics include: review completion rate (percentage of items reviewed within their SLA window), staleness rate (percentage of domain content currently past review date), gap resolution rate (percentage of known gaps addressed per quarter), and content accuracy rate (derived from user corrections and flags). These metrics are visible to the steward, their manager, and the central knowledge function. Sunlight is accountability.
- **The Manager Layer.** The steward's manager must care about knowledge governance metrics. If a steward's knowledge health score is in the red and their manager never asks about it, the accountability structure is broken at the manager layer. Quarterly business reviews should include knowledge health as a standing agenda item for teams with stewardship responsibilities. This is not a knowledge team problem to solve — it is a management expectations problem.
- **Escalation Paths.** When a steward consistently fails to maintain their domain — review backlog accumulates, known gaps are not addressed, staleness rate climbs — there must be a defined escalation path. The central knowledge function escalates to the steward's manager. If the issue persists, the central knowledge function can implement temporary centralized management of the domain until a new steward is designated. The nuclear option — removing contributor access for a chronically underperforming steward — is available but should be documented as a last resort, not a first response.
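The metric formulas above reduce to simple ratio arithmetic plus threshold checks. A minimal sketch in Python, using hypothetical counts (the numbers are illustrative, not from any real domain) and the 15%/25% yellow/red staleness thresholds from the scorecard:

```python
# Sketch: computing a steward's scorecard metrics.
# All counts below are hypothetical; in practice they would come from
# the knowledge base's metadata store.

def rate(numerator: int, denominator: int) -> float:
    """Return a percentage, treating an empty denominator as 0%."""
    return 100.0 * numerator / denominator if denominator else 0.0

def staleness_status(staleness_pct: float) -> str:
    """Map staleness rate to traffic-light status: 15% yellow, 25% red."""
    if staleness_pct >= 25.0:
        return "red"
    if staleness_pct >= 15.0:
        return "yellow"
    return "green"

# Hypothetical monthly numbers for one steward's domain.
review_completion = rate(47, 52)   # reviewed_on_time / total_due_for_review
staleness = rate(31, 180)          # overdue_items / total_domain_items
gap_resolution = rate(6, 11)       # gaps_closed / total_open_gaps
flags_per_1000 = 3 / (4200 / 1000) # accuracy_flags / (page_views / 1000)

print(f"review completion: {review_completion:.1f}%  (target >= 90%)")
print(f"staleness: {staleness:.1f}%  -> {staleness_status(staleness)}")
print(f"gap resolution: {gap_resolution:.1f}%  (target >= 50%)")
print(f"accuracy flags per 1000 views: {flags_per_1000:.2f}  (target <= 2)")
```

Note the empty-denominator guard: a steward with nothing due for review should read as 0% complete, not crash the report.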
```yaml
# Knowledge governance metrics — steward scorecard
metrics:
  review_completion_rate:
    description: "% of content items reviewed within SLA window"
    target: ">= 90%"
    measurement: "monthly"
    formula: "reviewed_on_time / total_due_for_review"
    sla_windows:
      runbook: 90 days
      article: 180 days
      decision_record: 365 days
      faq: 90 days
  staleness_rate:
    description: "% of domain content currently past review date"
    target: "<= 10%"
    measurement: "weekly"
    formula: "overdue_items / total_domain_items"
    alert_threshold: "15% triggers yellow; 25% triggers red"
  gap_resolution_rate:
    description: "% of known gaps addressed per quarter"
    target: ">= 50%"
    measurement: "quarterly"
    formula: "gaps_closed / total_open_gaps"
    note: "Gaps flagged from Slack, tickets, and user reports."
  accuracy_flag_rate:
    description: "User-reported accuracy issues per 1000 page views"
    target: "<= 2 per 1000"
    measurement: "monthly"
    formula: "accuracy_flags / (page_views / 1000)"
    note: "Zero flags may indicate users have stopped trusting the KB, not that content is accurate."
reporting:
  steward_review: monthly (self-reported + automated)
  manager_review: quarterly
  central_knowledge_function_review: monthly
  executive_summary: quarterly
```
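The escalation path is essentially an ordered progression: notify the steward, escalate to the manager, fall back to centralized management, and only then reassign. A minimal sketch of that ordering — the stage names and single-step progression are illustrative assumptions, not a prescribed policy:

```python
# Sketch of the escalation path described above. Stage names are
# hypothetical labels for the steps in the text.
from typing import Optional

ESCALATION_STAGES = [
    "notify_steward",          # automated reminder to the steward
    "escalate_to_manager",     # central knowledge function raises it with the manager
    "centralized_management",  # domain temporarily managed centrally
    "reassign_steward",        # last resort: a new steward is designated
]

def next_stage(current: Optional[str]) -> str:
    """Advance one escalation stage; the path starts at notification
    and never advances past the final (last-resort) stage."""
    if current is None:
        return ESCALATION_STAGES[0]
    i = ESCALATION_STAGES.index(current)
    return ESCALATION_STAGES[min(i + 1, len(ESCALATION_STAGES) - 1)]
```

Encoding the order explicitly keeps the "nuclear option" where the text puts it: reachable, but only after every earlier stage has been exhausted.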