KM-201a · Module 3

The Knowledge Architecture Audit: ATLAS's Evaluation Framework

5 min read

Most organizations that engage a knowledge architect do not have a blank slate. They have an existing system — a Confluence space, a Google Drive structure, a Notion workspace, a SharePoint site — that was built without a formal architecture process and has accumulated three to seven years of entropy. The knowledge architecture audit is the diagnostic that identifies exactly where the system is failing and prescribes the minimum intervention required to fix it.

The audit evaluates the system against all four pillars: taxonomy, structure, retrieval, and governance. For each pillar, the audit produces a health score, a root cause analysis of the primary failure mode, and a prioritized remediation plan. The pillars are evaluated independently because the failure mode is almost never uniform — most organizations have one pillar that is severely degraded and three that are acceptable, and the correct intervention is targeted, not a full rebuild.

  1. Taxonomy Audit: Pull the full category tree and document count per category. Flag: categories with zero documents (orphaned structure), categories with over 150 documents (need splitting), top-level categories that map directly to org units (architectural risk), categories that are synonyms (inconsistent classification will result). Run the predictability test with five users on 20 representative documents. Score the taxonomy on a 1–5 scale: 5 is fully predictable and logically structured, 1 is arbitrary and organization-chart-based.
  2. Structure Audit: Pull a random sample of 50 documents across all major categories. Evaluate: does each document have a named owner? Does it have a review date? Does it have the minimum required metadata fields? Is the naming convention followed? Does the content match the document type it claims to be? Calculate the percentage of documents with complete, compliant structure. Below 70% compliance requires a mandatory schema migration before any other investment. Above 90% is healthy.
  3. Retrieval Audit: Collect 20 real queries from users (not test queries written by the KM team — actual questions employees have asked colleagues in the past 30 days). Run each query through the current retrieval system. Evaluate: was the right document surfaced in the top 3 results? If not, was it because of retrieval failure (the document exists but was not found) or a knowledge gap (the document does not exist)? Calculate retrieval precision. Below 60% is a retrieval architecture failure. Above 85% is functional.
  4. Governance Audit: Pull the percentage of documents with named human owners. Pull the percentage with review dates that have not passed. Pull the count of documents whose review dates have passed without a review occurring. Interview three domain leads: can they identify which documents in their area are outdated? Do they know when their next review cycle is? Governance is failing if: more than 20% of documents have no named owner, more than 15% have passed their review date, or domain leads cannot describe their review responsibilities.
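The threshold checks in the four audits above can be sketched as simple scoring functions. This is an illustrative sketch only: the document and query record fields (`owner`, `review_overdue`, `right_doc_in_top3`, and so on) are assumed names, not a prescribed schema, and the thresholds are the ones stated in the audit steps.

```python
# Hedged sketch of the structure, retrieval, and governance threshold
# checks. Field names are assumptions for illustration, not a schema.

def structure_compliance(docs):
    """Percent of sampled documents with complete, compliant structure."""
    compliant = sum(
        1 for d in docs
        if d["owner"] and d["review_date"] and d["metadata_complete"]
        and d["naming_ok"] and d["type_matches"]
    )
    return 100 * compliant / len(docs)  # below 70 -> schema migration first

def retrieval_precision(query_results):
    """Percent of real user queries whose right document was in the top 3."""
    hits = sum(1 for r in query_results if r["right_doc_in_top3"])
    return 100 * hits / len(query_results)  # below 60 -> retrieval failure

def governance_failing(docs, leads_know_duties):
    """Apply the three governance failure conditions from the audit."""
    n = len(docs)
    no_owner_rate = sum(1 for d in docs if not d["owner"]) / n
    overdue_rate = sum(1 for d in docs if d["review_overdue"]) / n
    return no_owner_rate > 0.20 or overdue_rate > 0.15 or not leads_know_duties
```

For example, a 50-document sample where 40 documents pass all five structure checks scores 80% compliance: above the migration threshold but short of healthy.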

The audit output is a prioritized intervention map, not a list of everything wrong. Every knowledge system has problems — the audit identifies the problems that are causing the most retrieval failures and trust erosion, which are not always the most visible problems. A taxonomy with 40 near-duplicate categories looks bad in an audit spreadsheet but may be causing fewer user failures than a governance model where 30% of documents have no owner and have not been reviewed in three years.

The remediation priority order is: governance first, then structure, then taxonomy, then retrieval. This is the three-layer rule applied to knowledge architecture. Governance failures (no ownership, no review process) cannot be solved by improving retrieval quality — the underlying problem is at the governance layer. Structure failures (incomplete metadata, inconsistent schemas) cannot be solved by taxonomy refactoring. Fix the foundational layer first. The upper layers depend on it.
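The ordering rule above can be expressed as a sketch: among the failing pillars, remediate in fixed layer order rather than by severity. Assumptions here: each pillar gets a 1–5 health score (the text defines this scale only for taxonomy, so extending it to all four pillars is an illustrative choice), and a score below 3 counts as failing.

```python
# Sketch of the three-layer remediation rule. The fixed order is from the
# text; the 1-5 scoring of all pillars and the failing threshold of 3 are
# illustrative assumptions.

LAYER_ORDER = ["governance", "structure", "taxonomy", "retrieval"]

def remediation_plan(health_scores, failing_below=3):
    """Return failing pillars ordered by foundational layer, not severity."""
    return [
        pillar for pillar in LAYER_ORDER
        if health_scores.get(pillar, 5) < failing_below
    ]
```

Note that a taxonomy scoring 2 still waits behind a governance pillar scoring 1: the ordering is structural, because upper-layer fixes do not hold without the layers beneath them.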

Do This

  • Audit all four pillars independently before prescribing an intervention
  • Use real user queries for the retrieval audit, not synthetic test cases
  • Prioritize remediation by the pillar with the highest failure impact, not the most visible problem
  • Apply the three-layer rule: fix governance before structure, structure before taxonomy, taxonomy before retrieval

Avoid This

  • Prescribe a platform migration as the solution to governance failures
  • Run the audit once and treat the findings as permanent — re-audit annually
  • Fix taxonomy cosmetically while leaving governance unfunded
  • Treat the audit as a report to file rather than a remediation plan to execute