AT-201a · Module 3
Review Gate Patterns
3 min read
A review gate is a checkpoint that output must pass before advancing to the next stage. It is the production line's quality-control station. Every deliverable stops at the gate, is inspected against defined criteria, and either passes through or is sent back for rework. Without gates, the pipeline is a conveyor belt with no inspection: whatever the first agent produces flows straight into the final deliverable.
Gates come in three varieties. Automated gates use scoring thresholds: "All dimension scores must be 7 or above to pass." These run without human involvement and handle the majority of quality checks. Conditional gates apply different thresholds based on context: "Internal documents pass at 6+. Client-facing documents pass at 8+. Board presentations pass at 9+." The stakes determine the bar. Human-in-the-loop gates require explicit human approval before proceeding: "Legal review of contract language requires human sign-off regardless of automated scores." Some decisions should not be delegated to agents.
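The three varieties can be sketched as simple predicate functions. This is a minimal illustration, not a framework API: the function names, score dictionaries, and threshold table are all hypothetical, though the thresholds match the numbers above.

```python
# Conditional gate bars from the module: the stakes determine the bar.
PASS_THRESHOLDS = {"internal": 6, "client": 8, "board": 9}

def automated_gate(scores, threshold=7):
    """Automated gate: every dimension score must meet the threshold."""
    return all(score >= threshold for score in scores.values())

def conditional_gate(scores, audience):
    """Conditional gate: apply a different threshold based on audience."""
    return automated_gate(scores, PASS_THRESHOLDS[audience])

def human_gate(content_type, approved_by=None):
    """Human-in-the-loop gate: certain content needs explicit sign-off,
    regardless of what the automated scores say."""
    if content_type in {"legal", "financial", "regulatory"}:
        return approved_by is not None
    return True
```

Note how `human_gate` ignores scores entirely: for content in its protected set, the only input that matters is whether a named human approved it.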
Gate placement is a design decision. Too few gates and errors propagate. Too many gates and the pipeline stalls. The principle I follow: place a gate after every stage where the output format changes. Research produces structured data — gate. Drafting produces prose from that data — gate. Review produces feedback — no gate needed, feedback goes directly to the revision stage. Formatting produces the final deliverable — gate.
Format changes are natural quality checkpoints because they are the points where information can be lost or distorted. When structured data becomes prose, did the key findings survive the transformation? When a draft becomes a formatted PDF, did the formatting preserve the content hierarchy? These are the junctions where quality review has the highest impact.
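The placement rule above can be made concrete as a list of (stage, gate) pairs: a gate wherever the output format changes, `None` after review. The stage and gate functions here are toy stand-ins for real agents, assumed purely for illustration.

```python
def run_pipeline(payload, stages):
    """Run stages in order; stop at any gate the stage's output fails."""
    for name, stage, gate in stages:
        payload = stage(payload)
        if gate is not None and not gate(payload):
            raise ValueError(f"{name!r} output failed its gate; send back for rework")
    return payload

# Toy stages mirroring the module's pipeline.
def do_research(topic):          # produces structured data
    return {"topic": topic, "findings": ["finding A", "finding B"]}

def do_drafting(data):           # structured data becomes prose
    return f"Report on {data['topic']}: " + "; ".join(data["findings"])

def do_review(draft):            # produces feedback; flows to revision, no gate
    return draft

def do_format(draft):            # prose becomes the final deliverable
    return {"format": "pdf", "body": draft}

pipeline = [
    ("research",   do_research, lambda d: bool(d.get("findings"))),   # format change: gate
    ("drafting",   do_drafting, lambda p: len(p) > 0),                # format change: gate
    ("review",     do_review,   None),                                # feedback only: no gate
    ("formatting", do_format,   lambda f: f.get("format") == "pdf"),  # format change: gate
]

result = run_pipeline("Q3 revenue", pipeline)
```

The gate lambdas are deliberately trivial; in practice each would check that the format transition preserved the content, e.g. that every key finding from the research data appears in the drafted prose.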
Do This
- Place gates at format transition points: data to prose, prose to formatted deliverable
- Use different thresholds for different audiences: internal vs. client-facing vs. board-level
- Require human approval for legal, financial, and regulatory content regardless of scores
- Log gate results for each deliverable — the quality record is valuable over time
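The logging point above can be as simple as an append-only table. A minimal sketch, assuming a local SQLite file; the table name and schema are illustrative, not a prescribed format.

```python
import sqlite3
from datetime import datetime, timezone

def log_gate_result(db, deliverable_id, gate_name, scores, passed):
    """Append one gate decision to the quality record."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS gate_log "
        "(ts TEXT, deliverable_id TEXT, gate TEXT, scores TEXT, passed INTEGER)"
    )
    db.execute(
        "INSERT INTO gate_log VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), deliverable_id,
         gate_name, repr(scores), int(passed)),
    )
    db.commit()

db = sqlite3.connect("gate_log.db")  # path is an assumption
log_gate_result(db, "memo-001", "draft_gate", {"clarity": 8, "accuracy": 7}, True)
```

Over time this record answers useful questions: which gates fail most often, which stages generate the most rework, and whether quality is trending up or down.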
Avoid This
- Skip gates because "the agent is good enough" — first drafts always have issues
- Use the same threshold for everything — a Slack message and a board deck have different bars
- Gate every micro-step — too many gates stall the pipeline and waste tokens on trivial checks
- Let automated gates make final decisions on high-stakes content — some gates need humans