AT-301a · Module 2

Orchestration Patterns

3 min read

The lead should gather all requirements upfront — don't dispatch agents on assumptions. Clarify scope, format, constraints, and priorities first.

A common failure mode: the lead immediately dispatches agents based on a vague request, then the results don't match what was needed. Instead, have the lead ask 5-10 targeted clarifying questions: What's the desired output format? What are the constraints? Which quality dimensions matter most? Which reference materials should be used? This front-loaded investment saves multiple rounds of regeneration.
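The front-loading idea above can be sketched as a pre-dispatch check. The field names and question wording here are illustrative assumptions, not a real agent-framework API: the point is that the lead refuses to dispatch until the request is specified.

```python
# Hypothetical sketch: gather requirements before dispatching agents.
# REQUIRED_FIELDS and the request shape are assumptions for illustration.
REQUIRED_FIELDS = {
    "output_format": "What's the desired output format?",
    "constraints": "What constraints (length, style, tooling) apply?",
    "quality_priority": "Which quality dimensions matter most?",
    "references": "Which reference materials should be used?",
}

def clarifying_questions(request: dict) -> list[str]:
    """Return the questions the lead should ask before dispatching anything."""
    return [q for field, q in REQUIRED_FIELDS.items() if not request.get(field)]

# A vague request yields questions; the lead asks these first
# instead of dispatching on assumptions.
vague = {"task": "write a report"}
questions = clarifying_questions(vague)
```

Once `clarifying_questions` returns an empty list, the request is specified well enough to dispatch, and the answers can be passed verbatim into each agent's task description.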

Not every task needs the most capable agent. Route simple work to lightweight agents and reserve heavy computation for complex reasoning.

You can specify the model parameter when spawning agents: haiku for quick, straightforward tasks; sonnet for moderate complexity; opus for deep reasoning. A codebase search doesn't need Opus. A simple file read doesn't need an agent at all. Match the agent's capability to the task's complexity. This reduces both cost and latency: haiku responds faster and costs less than opus on tasks that don't need the extra reasoning depth.
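This routing rule can be made explicit as a lookup from task complexity to model tier. The complexity labels and the function itself are illustrative assumptions; the model names come from the tiers described above, and `None` captures the "doesn't need an agent at all" case.

```python
from typing import Optional

def pick_model(task_complexity: str) -> Optional[str]:
    """Map task complexity to a model tier; trivial work gets no agent at all."""
    routing = {
        "trivial": None,       # e.g. a simple file read: handle it directly
        "simple": "haiku",     # quick, straightforward tasks (e.g. codebase search)
        "moderate": "sonnet",  # moderate complexity
        "complex": "opus",     # deep multi-step reasoning
    }
    return routing[task_complexity]
```

The lead would call `pick_model` once per subtask before spawning, so the model parameter is always a deliberate choice rather than a default.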

When multiple agents return results, the lead should compare, rank, and synthesize — not just concatenate.

If three agents each propose an implementation approach, the lead's job is to evaluate the trade-offs, rank the options, and present a recommendation. Raw concatenation ("Agent A said X, Agent B said Y") wastes the user's time. The lead should add judgment: "Agent A's approach is fastest but has a security trade-off. Agent B's approach is recommended because..." This is where the lead adds value beyond simple routing.
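A minimal sketch of that synthesis step, assuming the lead has already scored each proposal's trade-offs (the `Proposal` shape and scoring are hypothetical, for illustration): rank, recommend one, and explain why the others weren't chosen, instead of concatenating raw outputs.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str
    approach: str
    score: float     # the lead's judgment of the overall trade-off
    trade_off: str   # the main downside the lead identified

def synthesize(proposals: list[Proposal]) -> str:
    """Rank proposals and return a recommendation, not a concatenation."""
    ranked = sorted(proposals, key=lambda p: p.score, reverse=True)
    best, rest = ranked[0], ranked[1:]
    lines = [f"Recommended: {best.agent}'s {best.approach} approach."]
    for p in rest:
        lines.append(f"Not chosen: {p.agent}'s {p.approach} approach ({p.trade_off}).")
    return "\n".join(lines)
```

The ranking criterion is where the lead's judgment lives; a single score is the simplest version, but the same structure works with multi-criteria comparisons.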