KM-301d · Module 3
Measuring Transfer Success
3 min read
You spent weeks extracting expertise from a senior employee. You built the artifact. You ran the review loops. Now how do you know whether it worked? Most organizations never answer this question: they ship the artifact, count completions, and call it a success. Completion counts are a vanity metric. Transfer success is measured in performance change, not training completion. The extraction was successful if people who use the artifact perform the task better. Anything less than that is a documentation project, not knowledge transfer.
- Baseline Measurement: Before deploying the knowledge artifact, measure the current performance of the target population on the task domain. Time-to-proficiency for new hires. Error rate on the task. Quality scores on outputs. Escalation rate for difficult cases. These baselines are your before-state. Without them, you cannot prove the after-state represents improvement.
- Post-Deployment Measurement: Measure the same metrics after artifact deployment, with sufficient sample size and time. The standard error of measurement in most performance data means you need at least eight to twelve data points per condition before drawing conclusions. If you measure one week after deployment with three users, you have noise, not signal.
- Attribution Control: Isolate the effect of the knowledge artifact from concurrent changes such as new management, changed incentives, or a different customer mix. Use a control group if possible: the same population split between those who have access to the artifact and those who do not, measured over the same period (a minimal sketch of this comparison follows the list). Attribution without control is correlation, not causation.
- Leading and Lagging Indicators: Lagging indicators (performance outcomes) take weeks to months to move. Leading indicators (artifact usage rate, task completion time, help-seeking behavior) move faster and tell you whether the artifact is being used at all. Both matter. A successful extraction that is never used has produced no transfer. Track usage as a leading indicator of eventual performance impact; a short weekly monitoring sketch also follows below.
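Here is a minimal sketch of the before/after workflow with a control group. The metric (errors per 100 tasks), the data, and the eight-sample floor per condition are illustrative assumptions, not prescriptions; the comparison itself is a plain difference-in-differences.

```python
# Minimal sketch: before/after comparison with a control group.
# The metric, data, and thresholds below are hypothetical illustrations.
from statistics import mean

MIN_SAMPLES = 8  # rough floor per condition before drawing conclusions

def mean_change(before, after):
    """Average change in a performance metric, or None if the sample is too small."""
    if min(len(before), len(after)) < MIN_SAMPLES:
        return None  # noise, not signal
    return mean(after) - mean(before)

# Hypothetical error rates (errors per 100 tasks), before and after deployment.
artifact_group = {"before": [12, 14, 11, 13, 15, 12, 14, 13],
                  "after":  [8, 9, 7, 10, 8, 9, 7, 8]}
control_group  = {"before": [13, 12, 14, 13, 12, 15, 13, 14],
                  "after":  [12, 13, 12, 14, 13, 12, 13, 13]}

treated = mean_change(artifact_group["before"], artifact_group["after"])
untreated = mean_change(control_group["before"], control_group["after"])

if treated is None or untreated is None:
    print("Not enough data points per condition yet; keep collecting.")
else:
    # Difference-in-differences: subtract whatever shifted for everyone
    # (new management, seasonality, customer mix) from the artifact group's change.
    print(f"Artifact group change: {treated:+.1f} errors per 100 tasks")
    print(f"Control group change:  {untreated:+.1f} errors per 100 tasks")
    print(f"Estimated artifact effect: {treated - untreated:+.1f} errors per 100 tasks")
```

With the made-up numbers above, the artifact group improves by six errors per 100 tasks while the control group improves by half an error, so roughly five and a half errors of improvement can be attributed to the artifact rather than to concurrent changes.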
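And a sketch of a weekly leading-indicator check that runs while the lagging outcomes are still settling. The usage floor and the weekly figures are hypothetical; the point is to flag a usage drop before quality scores confirm it.

```python
# Minimal sketch: weekly leading-indicator check alongside a lagging outcome.
# Thresholds and weekly figures are hypothetical illustrations.

# Leading indicator: share of target users who opened the artifact each week.
weekly_usage_rate = [0.72, 0.68, 0.41, 0.35]   # usage is sliding
# Lagging indicator: average quality score, which moves weeks later.
weekly_quality = [3.9, 4.0, 4.0, 3.9]          # still looks flat

USAGE_FLOOR = 0.60  # hypothetical alert threshold

for week, usage in enumerate(weekly_usage_rate, start=1):
    if usage < USAGE_FLOOR:
        # Flag the drop now; waiting for quality scores to confirm it costs weeks.
        print(f"Week {week}: usage at {usage:.0%}, below floor; investigate before outcomes slip")
```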
Do This
- Establish baseline performance metrics before deploying any knowledge artifact
- Measure transfer with performance outcomes, not artifact completion rates
- Use control groups or before/after designs to attribute performance change to the artifact
- Track leading indicators (usage, task time) alongside lagging indicators (quality, outcomes)
Avoid This
- Count training completions as a proxy for knowledge transfer — completions measure exposure, not transfer
- Deploy without a measurement plan — you will never know whether the extraction worked
- Attribute all performance change to the artifact without controlling for concurrent factors
- Wait for quarterly reviews to assess transfer — weekly leading indicators tell you if something is wrong before the lagging indicators confirm it