CDX-301g · Module 1
Scope Sizing for Agents
3 min read
Scope sizing is the art of making each task unit the right size for an agent. Too small, and the coordination overhead exceeds the task itself — you spend more tokens on hand-offs than on work. Too large, and the agent loses focus, exceeds its context window, or produces lower-quality output because it is juggling too many concerns simultaneously. The sweet spot is a task that takes 5-15 minutes of agent time and produces a coherent, testable artifact.
Three signals indicate a task is too large for a single agent: it touches more than 8-10 files, it requires reading more than 2000 lines of existing code for context, or it has more than 3 distinct subtasks that could be individually tested. Three signals indicate a task is too small: the hand-off document is longer than the task output, the setup instructions exceed the implementation, or the task can be expressed in a single sentence without losing precision.
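The three "too large" signals can be expressed as a simple checklist. This is an illustrative sketch, not a real API — `TaskSignals` and `classifyScope` are hypothetical names, and the thresholds mirror the heuristics above (with "8-10 files" read conservatively as more than 10):

```typescript
// Hypothetical representation of a candidate task's measurable signals.
interface TaskSignals {
  filesTouched: number;        // files the task would modify
  contextLinesNeeded: number;  // existing code the agent must read first
  testableSubtasks: number;    // subtasks that could be tested separately
}

type ScopeVerdict = "too-large" | "right-sized";

// Any one signal firing is enough to recommend decomposition.
function classifyScope(s: TaskSignals): ScopeVerdict {
  if (s.filesTouched > 10) return "too-large";
  if (s.contextLinesNeeded > 2000) return "too-large";
  if (s.testableSubtasks > 3) return "too-large";
  return "right-sized";
}
```

Treating the signals as an OR rather than a score keeps the check conservative: a task that reads clean on two axes but overflows the third still gets split.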
# Scope sizing heuristics
TOO SMALL (merge into larger task):
✗ "Add a single import to config.ts"
✗ "Rename variable from x to y in one file"
✗ "Add a comment to the auth middleware"
→ Hand-off overhead > task value
RIGHT-SIZED (ideal agent unit):
✓ "Implement rate limiting middleware with config + tests"
✓ "Refactor the auth module from callbacks to async/await"
✓ "Write integration tests for the payment flow"
→ 5-15 min agent time, testable output
TOO LARGE (decompose further):
✗ "Rewrite the entire API layer"
✗ "Add authentication, authorization, and audit logging"
✗ "Migrate from REST to GraphQL"
→ Multiple distinct deliverables, context overflow risk
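As a concrete illustration, the second too-large example above ("Add authentication, authorization, and audit logging") decomposes naturally along its distinct deliverables. The task titles below are hypothetical, but each follows the right-sized pattern: one coherent concern, config plus tests, reviewable as a single PR:

```typescript
// Hypothetical decomposition of one too-large task into right-sized units.
const decomposed: string[] = [
  "Implement authentication (login endpoint + session middleware + tests)",
  "Add role-based authorization checks to existing routes + tests",
  "Add audit logging for auth events (log schema + tests)",
];
```

Note that the split follows deliverable boundaries, not file boundaries — each unit produces something independently testable.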
Do This
- Target 5-15 minutes of agent time per task unit
- Ensure each unit produces a testable, self-contained artifact
- Use the "single PR" heuristic — would this be a reviewable pull request?
- Batch trivially small tasks together to amortize coordination overhead
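The batching advice above can be sketched as a greedy merge: tasks under a minimum-minutes floor are grouped until the group clears the floor, while already right-sized tasks ship as their own units. `batchSmallTasks` and the `Task` shape are illustrative assumptions, and the 5-minute floor comes from the 5-15 minute target stated earlier:

```typescript
// Hypothetical task unit with a rough agent-time estimate.
interface Task {
  title: string;
  estMinutes: number;
}

// Greedily merge sub-floor tasks so hand-off overhead is amortized
// across a batch instead of paid per micro-task.
function batchSmallTasks(tasks: Task[], floorMinutes = 5): Task[][] {
  const batches: Task[][] = [];
  let current: Task[] = [];
  let currentMinutes = 0;

  for (const t of tasks) {
    if (t.estMinutes >= floorMinutes) {
      batches.push([t]); // already right-sized: its own unit
      continue;
    }
    current.push(t);
    currentMinutes += t.estMinutes;
    if (currentMinutes >= floorMinutes) {
      batches.push(current); // batch now clears the floor
      current = [];
      currentMinutes = 0;
    }
  }
  if (current.length > 0) batches.push(current); // leftover stragglers
  return batches;
}
```

A real scheduler would also cap batches at the upper end of the window and avoid mixing unrelated files in one batch; this sketch only shows the amortization idea.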
Avoid This
- Split a 10-file change into 10 single-file tasks — the coordination cost will exceed the parallelism gain
- Give one agent a 50-file refactoring — context overflow will degrade quality
- Optimize for maximum parallelism over right-sized units — five good agents beat twenty micro-agents
- Ignore context window limits — agents that exhaust their context produce degraded output