CW-201a · Module 1
Multi-Step Research → Draft → Review
4 min read
Good news, everyone! We are about to build something that actually ships. Not a demo. Not a proof of concept. A three-stage pipeline that takes raw research, turns it into a polished draft, and runs it through quality assurance before you ever show it to a stakeholder.
The pipeline has three stages and they are non-negotiable. Stage one: research. You spin up parallel agents to gather information from multiple sources. They search the web, analyze documents, pull data, and return structured findings. Stage two: drafting. A dedicated writing agent consumes the research output and produces a first draft in the format you specified — report, memo, slide deck, executive summary, whatever the deliverable requires. Stage three: review. A QA agent evaluates the draft against specific criteria and either approves it or sends it back with actionable feedback.
This is not optional architecture. This is minimum viable workflow for anything that leaves your desk. If you skip stage three, you are shipping first drafts. If you skip stage one, your drafts are built on assumptions instead of evidence. The pipeline exists because each stage catches what the previous stage cannot see.
Here is where most people go wrong: they try to do all three stages in a single prompt. "Research competitor X and write a report and make sure it is accurate." That is three jobs crammed into one context window, and the quality of each job degrades because they are competing for attention. The research is shallow because the agent is already thinking about how to write the report. The writing is rushed because the agent is already thinking about accuracy. The accuracy check is superficial because the context window is full of research notes and draft prose.
The pipeline separates concerns. The research agent does not know a report will be written. It just gathers information. The drafting agent does not know a QA agent is coming. It just writes the best draft it can. The QA agent does not know how the research was gathered. It just evaluates what is in front of it. Each agent operates with full focus on its single responsibility.
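The separation of concerns above can be sketched as three small functions, each blind to the others. Everything here is illustrative: `call_agent` is a hypothetical stand-in for whatever agent-invocation API you actually use, and it returns a canned string so the dataflow is runnable on its own.

```python
# Minimal sketch of the three-stage pipeline. `call_agent` is a
# hypothetical placeholder: swap in your real agent/LLM call.
def call_agent(role: str, prompt: str) -> str:
    return f"[{role} output for: {prompt[:40]}]"

def research(topic: str) -> list[str]:
    # Stage 1: parallel agents, each with one narrow angle.
    angles = ["primary sources", "competitor data", "market context"]
    return [call_agent("researcher", f"Gather {a} on {topic}") for a in angles]

def draft(findings: list[str], fmt: str) -> str:
    # Stage 2: the drafter sees only the research output, nothing else.
    return call_agent("drafter", f"Write a {fmt} from: {' | '.join(findings)}")

def review(text: str) -> str:
    # Stage 3: the QA agent sees only the draft it must evaluate.
    return call_agent("reviewer", f"Evaluate this draft: {text}")

findings = research("competitor X")
report = draft(findings, "executive summary")
verdict = review(report)
```

Note that no function reaches into another stage's context: the drafter takes research output as an argument, not a shared conversation, which is exactly what keeps each agent focused on its single responsibility.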
- 1. Research Stage: Spin up 2-4 parallel research agents, each with a specific angle: one for primary sources, one for competitor data, one for market context. Have each agent save its findings to a structured file, not a conversation summary. Files survive context compaction.
- 2. Draft Stage: Queue the drafting prompt before the research finishes. Point the drafting agent at the research files by name, and specify the deliverable format, length, tone, and audience. The more constraints you give, the better the first draft.
- 3. Review Stage: The QA agent evaluates the draft on 4-6 specific dimensions: factual accuracy against the research files, logical flow, completeness, formatting, and audience appropriateness. If any dimension scores below threshold, it returns specific revision instructions to the drafter.
- 4. Iteration or Ship: One to two revision cycles are typical. If the draft still fails after three rounds, the problem is in the research or the prompt, not in the iteration count. Restructure rather than iterate further.
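Steps 3 and 4 together form a bounded loop: score, revise, and give up after three rounds rather than iterating forever. A hedged sketch of that loop, where `score_draft` and `revise` are hypothetical placeholders for your real QA and drafting agents, and the 0-10 scale with a threshold of 7 is an assumed convention:

```python
# Review-or-ship loop from steps 3-4. Scoring and revision are
# placeholder stubs; real versions would call QA and drafting agents.
DIMENSIONS = ["accuracy", "flow", "completeness", "formatting", "audience"]
THRESHOLD = 7    # assumed minimum acceptable score per dimension (0-10 scale)
MAX_ROUNDS = 3   # past this, restructure instead of iterating

def score_draft(text: str) -> dict[str, int]:
    # Placeholder: a real QA agent would grade each dimension.
    return {d: 8 for d in DIMENSIONS}

def revise(text: str, failures: list[str]) -> str:
    # Placeholder: a real drafter would act on the specific feedback.
    return text + f" [revised for: {', '.join(failures)}]"

def review_loop(draft: str) -> tuple[str, bool]:
    for _ in range(MAX_ROUNDS):
        scores = score_draft(draft)
        failures = [d for d, s in scores.items() if s < THRESHOLD]
        if not failures:
            return draft, True   # all dimensions pass: ship it
        draft = revise(draft, failures)
    # Three failed rounds: the research or the prompt is at fault.
    return draft, False

final, shipped = review_loop("first draft")
```

The `MAX_ROUNDS` cap is the point: an unbounded loop hides a structural problem behind endless revision, while a hard stop forces you back to the research or the prompt.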