KM-301i · Module 2
Before/After Impact Studies
3 min read
The strongest form of KM ROI evidence is a before/after impact study: a controlled comparison of performance on knowledge-dependent tasks before and after the knowledge system is deployed. Before/after studies are more compelling than ROI projections because they measure what actually happened rather than what was predicted; more credible because they rest on the organization's own data rather than industry benchmarks; and more actionable because they identify which use cases create the most value, informing where to invest next.
- Study Design: Select three to five task types that are directly supported by the knowledge system and where performance is measurable, such as average ticket resolution time, deal win rate on specific objection types, time to close for knowledge-supported sales stages, or compliance audit pass rate. Establish baseline performance data from the period before deployment. Collect the same data for a comparable period after deployment. Use the same practitioner population where possible.
- Controlling for Confounds: Before/after studies are susceptible to confounds: the before period was a slow month, the after period included a training program, the product changed. Control for confounds by using measurement windows of at least 90 days (to dampen seasonal effects), identifying and documenting concurrent changes that might affect the metrics, and, where feasible, comparing the improvement to a control group that did not have access to the knowledge system.
- Presenting the Study: Executive-ready before/after study format: one paragraph of context (what we measured, for what period, against what baseline), one data comparison table (before vs. after for each metric), one paragraph of business translation (what the delta means in dollars and risk), and one forward projection (if we replicate this improvement across the full use case set, the annual value is X). The format is evidence first, interpretation second, implication third.
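The control-group comparison described above amounts to a simple difference-in-differences calculation. A minimal sketch, using hypothetical metric values (the function names and numbers are illustrative, not from a real study):

```python
# Hypothetical before/after values for one metric (e.g. average
# ticket resolution time in hours); all figures are illustrative.

def percent_change(before, after):
    """Relative change from the baseline period to the after period."""
    return (after - before) / before * 100

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Improvement in the treated group beyond the control group's drift.

    Subtracting the control group's change strips out whatever would
    have happened anyway (seasonality, product changes, training).
    """
    treated_delta = percent_change(treated_before, treated_after)
    control_delta = percent_change(control_before, control_after)
    return treated_delta - control_delta

# Team with knowledge-system access vs. a comparable team without it.
net_effect = diff_in_diff(
    treated_before=6.0, treated_after=4.5,   # resolution time fell 25%
    control_before=6.2, control_after=5.9,   # control drifted down ~4.8%
)
print(f"Net change attributable to the system: {net_effect:.1f} pp")
```

A raw before/after delta for the treated team alone would overclaim (25 points); netting out the control group's drift yields the defensible figure.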
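The forward projection in the presentation format above is arithmetic, but spelling it out keeps the executive translation honest. A sketch with hypothetical placeholder inputs (none of these figures come from the module):

```python
# Translate a measured per-task saving into projected annual value.
# All inputs are hypothetical placeholders for illustration.

def annual_value(hours_saved_per_task, tasks_per_year, loaded_hourly_cost):
    """Forward projection: replicate the per-task saving across a year."""
    return hours_saved_per_task * tasks_per_year * loaded_hourly_cost

# 1.5 hours saved per ticket, 12,000 tickets/year, $85 loaded cost/hour.
projected = annual_value(1.5, 12_000, 85)
print(f"Projected annual value: ${projected:,.0f}")  # → $1,530,000
```

Showing the three inputs alongside the result lets the audience challenge each assumption separately, which is more credible than presenting a single headline number.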
Do This
- Establish baseline measurements before deployment — you cannot do a before/after study without a before
- Use task-specific performance metrics, not system usage metrics, as the dependent variable in before/after studies
- Document concurrent changes that might confound the results — academic credibility requires acknowledging the limitations
- Present before/after studies with business translation — the delta in percentage points needs to become dollars and risk for executive audiences
Avoid This
- Attempt a before/after study without a pre-deployment baseline — the post-deployment data alone cannot prove impact
- Select only the task types that showed improvement for the study — cherry-picked results destroy credibility
- Attribute all performance change to the knowledge system without controlling for confounds — overclaiming undermines the next study