KM-301e · Module 2

Documenting AI-Assisted Processes

3 min read

Documenting processes that involve AI adds a layer of complexity that conventional process documentation formats do not handle well. AI outputs are probabilistic, not deterministic — the same input does not always produce the same output. The process documentation must account for output variation, quality thresholds, human review triggers, and the handling of low-confidence or unexpected outputs. This is not optional complexity. It is the nature of AI-assisted processes.

1. **Document the Output Quality Gate.** Every AI-assisted process needs a documented quality gate: the criteria that determine whether the AI output is accepted, sent to human review, or rejected and regenerated. "If the confidence score is below 0.85, route to human review." "If the output contains [flagged pattern], escalate to supervisor." The quality gate is the most important decision point in any AI-assisted process, and it is the one most often left undocumented.
2. **Document the Human-in-the-Loop Triggers.** Define exactly when and how a human enters the process — not as an exception, but as a designed component: the trigger criteria, the human's role at that point, the information they receive, and the path back to automated processing after review. AI-assisted processes that do not document human-in-the-loop points create ad hoc escalation patterns that vary by practitioner and produce inconsistent outcomes.
3. **Document the Failure Modes of the AI Component.** What does the AI do when it should not be confident? What does it do when the input is outside its training distribution? Document the known failure modes of the specific AI component in the process — hallucination patterns, edge-case failures, input format sensitivity — and the handling procedure for each. This section of an AI process runbook is the one that will be consulted most often.
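The three practices above can be sketched as a single routing function: every path out of the AI step (accept, review, regenerate, escalate) is explicit and documented in code. The thresholds and flagged patterns below are illustrative assumptions, not values prescribed by this module:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Route(Enum):
    ACCEPT = auto()        # output passes the quality gate
    HUMAN_REVIEW = auto()  # routed to a reviewer (human-in-the-loop trigger)
    REGENERATE = auto()    # rejected; re-invoke the model
    ESCALATE = auto()      # flagged pattern; supervisor queue

@dataclass
class AIOutput:
    text: str
    confidence: float      # model-reported score in [0, 1]

# Hypothetical flagged patterns -- in practice these come from the runbook.
FLAGGED_PATTERNS = ("ssn:", "account number")

def quality_gate(output: AIOutput,
                 review_threshold: float = 0.85,
                 reject_threshold: float = 0.6) -> Route:
    """Documented quality gate: every exit from the AI step is explicit."""
    if any(p in output.text.lower() for p in FLAGGED_PATTERNS):
        return Route.ESCALATE
    if output.confidence < reject_threshold:
        return Route.REGENERATE
    if output.confidence < review_threshold:
        return Route.HUMAN_REVIEW
    return Route.ACCEPT
```

Because the gate is one function, the documented criteria and the implemented criteria cannot drift apart silently; a change to a threshold is a change to the runbook.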
The runbook template below consolidates these three elements into a single format:

# AI-Assisted Process Runbook: [Process Name]

## Overview
- Process owner: [Name/Role]
- AI component: [Model/System name and version]
- Last validated: [Date]

## Process Steps
1. [Step 1 — input preparation]
2. [Step 2 — AI invocation]
3. **Quality Gate**: Accept if [criterion]. Route to human review if [criterion].
4. [Step 4 — post-AI handling]

## Quality Gate Criteria
| Condition | Action | Owner |
|-----------|--------|-------|
| Confidence score ≥ 0.90 | Auto-accept, proceed to step 4 | System |
| Confidence score 0.75–0.89 | Route to human review queue | Reviewer |
| Confidence score < 0.75 | Reject, regenerate with [fallback prompt] | System |
| [Flagged pattern detected] | Escalate to supervisor | Escalation queue |

## Human Review Procedure
- Reviewer receives: [specific outputs + context]
- Reviewer decides: [approve / modify / reject]
- Return path: [how the process resumes after review]
- Target review time: [SLA]

## Known AI Failure Modes
| Failure Mode | Detection Signal | Handling Procedure |
|-------------|-----------------|-------------------|
| [Failure mode 1] | [Signal] | [Procedure] |
| [Failure mode 2] | [Signal] | [Procedure] |

## Escalation Path
If the process cannot be completed with standard handling:
→ [Escalation contact] via [channel] with [required context]
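The Known AI Failure Modes table is, structurally, a dispatch table: each row pairs a detection signal with a handling procedure. A minimal sketch of that idea in code, where the specific failure modes and detectors are illustrative assumptions rather than modes named in this module:

```python
from typing import Callable

def detect_empty_output(output: str) -> bool:
    """Detection signal: the model returned nothing usable."""
    return not output.strip()

def detect_truncation(output: str) -> bool:
    """Detection signal: a response cut off mid-sentence often
    lacks terminal punctuation."""
    s = output.rstrip()
    return bool(s) and s[-1] not in ".!?\"')"

# One entry per runbook row: (failure mode, detection signal, handling procedure)
FAILURE_MODES: list[tuple[str, Callable[[str], bool], str]] = [
    ("empty output", detect_empty_output, "regenerate with fallback prompt"),
    ("truncated output", detect_truncation, "re-invoke with higher token limit"),
]

def check_failure_modes(output: str) -> list[str]:
    """Return the documented handling procedure for each signal that fires."""
    return [procedure for _name, detect, procedure in FAILURE_MODES
            if detect(output)]
```

Keeping the table as data (rather than scattered `if` statements) means adding a newly discovered failure mode is a one-row change in both the runbook and the code.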