CW-101 · Module 3

The Research Sprint

4 min read

Good news, everyone! We have arrived at the practical module, and I am going to start with the workflow that I recommend to absolutely everyone as their first Co-Work experience. Not because it is the most powerful workflow. Because it is the most intuitive.

The research sprint. Pick a mundane real-life task that involves research. Buying an appliance. Finding plane tickets. Evaluating software tools. Choosing a neighborhood to move to. Comparing health insurance plans. Something you would normally spend two hours googling, opening seventeen tabs, copying and pasting into a spreadsheet, and eventually giving up and making a gut decision because you ran out of patience.

Now instead of doing that, spin up four or five agents in Co-Work, each with a specific research angle. Have them execute in parallel. Have them come back with structured findings. Have them compile everything into a PDF guide or a decision matrix.

That one workflow — however mundane the topic — gives you a preview of what is possible and builds the muscle memory for more complex orchestrations. You learn how to decompose a question into parallel tasks, how to write focused agent mandates, how to synthesize multi-source findings, and how to produce a deliverable that persists as a file. Those are the foundational skills for everything else in Co-Work.
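To make "decompose a question into parallel tasks" concrete, here is a minimal sketch of what that decomposition looks like as data. This is not Co-Work's actual interface — the agent names and mandate strings below are purely illustrative:

```python
# Hypothetical sketch: one research question split into focused,
# non-overlapping mandates for parallel agents. All names are invented
# for illustration, not part of any real Co-Work API.
question = "Compare the top 5 laptops under $1,500 for software development in 2026"

mandates = {
    "specs": "Collect CPU benchmarks, RAM/storage configurations, and "
             "display specs from manufacturer sites and review aggregators.",
    "users": "Summarize real user reviews from Reddit, forums, and YouTube.",
    "pricing": "Check current prices, deals, and availability at retailers.",
    "dev_fit": "Assess keyboard quality, port selection, Linux "
               "compatibility, and thermal performance under load.",
}

# Each mandate should answer a distinct slice of the question; if two
# mandates could return the same finding, tighten their wording.
for agent, mandate in mandates.items():
    print(f"Agent '{agent}': {mandate}")
```

The useful habit is not the syntax but the shape: one question on top, a handful of mandates underneath, with no two mandates able to produce the same finding.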

1. Define the Research Question. Be specific about what you want to learn. Not "research laptops" but "compare the top 5 laptops under $1,500 for software development in 2026, evaluating: CPU benchmarks for compilation tasks, RAM and storage configurations, display quality for extended coding sessions, and developer community reviews on Reddit and Hacker News." Specificity is the difference between useful parallel agents and agents that produce vague, overlapping work.
2. Assign Research Angles. Spin up agents with different focus areas: Agent 1 researches specs and benchmarks from manufacturer sites and review aggregators. Agent 2 finds real user reviews from forums, Reddit, and YouTube. Agent 3 checks current pricing, deals, and availability across retailers. Agent 4 analyzes developer-specific features — keyboard quality, port selection, Linux compatibility, thermal performance under sustained load. Each agent gets a focused mandate that does not overlap with the others.
3. Synthesize Results. The lead agent compiles all findings into a structured comparison. Ask for a specific format: a markdown report with a decision matrix, a ranked recommendation with pros and cons, or a side-by-side table. The format should match how you make decisions — if you are a spreadsheet person, ask for a table. If you are a narrative person, ask for a written analysis.
4. Create the Deliverable. Ask Claude to compile findings into a PDF, markdown file, or PowerPoint stored in your working folder. Having the output as a file means it persists beyond the session. You can share it, reference it later, or use it as input for a follow-up workflow. The deliverable is the point — not the conversation.
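Steps 2 through 4 can be simulated end to end in plain Python. The sketch below is a hedged illustration, not Co-Work's API: a thread pool stands in for parallel agents, the findings and scores are made up, and the output filename is arbitrary. What it demonstrates is the shape of the workflow — parallel execution, synthesis into a decision matrix, and a deliverable that persists as a file:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Stand-in "agents": each returns structured findings for one research
# angle. In a real sprint these would be parallel research agents; the
# scores here are invented placeholders (out of 10).
def research(angle: str) -> dict:
    findings = {
        "specs":   {"MacBook Air": 9, "ThinkPad X1": 8},
        "reviews": {"MacBook Air": 8, "ThinkPad X1": 9},
        "pricing": {"MacBook Air": 6, "ThinkPad X1": 7},
    }
    return findings[angle]

angles = ["specs", "reviews", "pricing"]

# Step 2: execute the research angles in parallel.
with ThreadPoolExecutor() as pool:
    results = dict(zip(angles, pool.map(research, angles)))

# Step 3: synthesize into a decision matrix (one row per product).
products = sorted({p for scores in results.values() for p in scores})
lines = ["| Product | " + " | ".join(angles) + " | Total |",
         "|" + "---|" * (len(angles) + 2)]
for p in products:
    scores = [results[a][p] for a in angles]
    lines.append(f"| {p} | " + " | ".join(map(str, scores))
                 + f" | {sum(scores)} |")

# Step 4: persist the deliverable as a file, not just a chat reply.
Path("laptop_decision_matrix.md").write_text("\n".join(lines) + "\n")
```

The resulting markdown file is the deliverable: it survives the session, can be shared, and can feed a follow-up workflow, which is exactly the point of step 4.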

Let me tell you why I am so insistent about starting with something mundane. It is the same reason pilots learn in Cessnas before they fly 747s. The stakes are low, the feedback loop is fast, and you can evaluate the output quality against your own domain knowledge.

If your first Co-Work experience is a high-stakes business analysis, you have two problems: you are learning the tool and evaluating the output simultaneously, and you do not have a baseline for what "good" looks like. But if your first experience is researching dishwashers, you already know something about dishwashers. You can tell immediately if the research is thorough, if the comparison is fair, if the recommendation is well-reasoned. You can evaluate the quality of the orchestration independently of the domain complexity.

After three or four research sprints on mundane topics, you will naturally see how to apply the pattern to business work. "Compare our product to the top 3 competitors across these 6 dimensions." "Research regulatory requirements for entering the EU market." "Evaluate 4 CRM platforms for our specific use case." Same pattern. Different stakes. Same muscle memory.