AT-201a · Module 3
The "Good Enough" Stopping Condition
3 min read
Iteration improves quality. But not linearly. The first revision catches the major issues — missing sections, factual errors, structural problems. The second revision catches the medium issues — awkward transitions, inconsistent terminology, missing citations. The third revision catches the minor issues — word choice, rhythm, formatting details. The fourth revision catches almost nothing new because the return on each additional pass has diminished to near zero.
This is the law of diminishing revisions, and ignoring it is the most expensive mistake in iterative agent workflows. I have watched teams burn through 5, 6, 7 revision cycles because the critic kept finding something to improve. And it always will find something to improve. Perfect does not exist. A critic with a 10-point scale and six dimensions will always find a dimension that could score one point higher. The question is whether that one point of improvement is worth the token cost and time of another full cycle.
The stopping condition must be defined before the first iteration begins. Not after. Before. "Stop when all dimensions score 7 or above." "Stop after 3 revision rounds regardless of scores." "Stop when the improvement between rounds is less than 1 point on any dimension." These are concrete, measurable stopping conditions that prevent runaway iteration.
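Each of these conditions can be written as a small predicate over the critic's score sheet. A minimal sketch, assuming scores arrive as a mapping from dimension name to a 0–10 integer (the dimension names and thresholds below are illustrative, not prescribed):

```python
# Hypothetical score sheet: dimension name -> 0-10 score from the critic.
Scores = dict[str, int]

def all_dimensions_at_least(scores: Scores, threshold: int = 7) -> bool:
    """'Stop when all dimensions score 7 or above.'"""
    return all(s >= threshold for s in scores.values())

def round_limit_reached(round_number: int, max_rounds: int = 3) -> bool:
    """'Stop after 3 revision rounds regardless of scores.'"""
    return round_number >= max_rounds

def improvement_stalled(prev: Scores, curr: Scores, min_gain: int = 1) -> bool:
    """'Stop when the improvement between rounds is less than 1 point
    on any dimension' -- i.e., no dimension gained a full point."""
    return all(curr[d] - prev[d] < min_gain for d in curr)
```

Because each rule is a plain boolean function, they can be logged, tested, and composed with `or` without touching the revision loop itself.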
My recommendation: combine a quality threshold with a round limit. "Stop when all dimensions score 7+ OR after 3 rounds, whichever comes first." If the output hits 7+ in two rounds, you save a round. If it does not hit 7+ after three rounds, the problem is not solvable through iteration — it requires restructuring the prompt, the roles, or the pipeline. Continuing to iterate on a fundamentally flawed foundation is not quality assurance. It is polishing a broken architecture.
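The combined rule fits in a short loop. A sketch under stated assumptions: `revise` and `critique` are hypothetical stand-ins for whatever reviser and critic calls your pipeline makes, and `critique` returns the dimension-score mapping described above:

```python
def iterate_until_good_enough(draft, revise, critique,
                              threshold: int = 7, max_rounds: int = 3):
    """Stop when all dimensions score threshold+ OR after max_rounds,
    whichever comes first. Returns the draft and the reason for stopping."""
    for round_number in range(1, max_rounds + 1):
        scores = critique(draft)  # dimension -> 0-10 score
        if all(s >= threshold for s in scores.values()):
            return draft, f"threshold met in round {round_number}"
        draft = revise(draft, scores)
    # max_rounds without hitting threshold: a design problem, not a quality
    # problem. Signal that the prompt, roles, or pipeline need restructuring.
    return draft, "round limit hit: restructure the prompt, roles, or pipeline"
```

Note that the loop returns a stopping reason alongside the draft, so a post-round-three failure surfaces as an explicit signal rather than silently shipping the last revision.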
Do This
- Set stopping conditions before iteration begins — threshold + round limit
- Track score improvement between rounds — if improvement is less than 1 point, stop
- Cap iteration at three rounds — beyond that, diminishing returns make each additional pass cost more than it improves
- Treat post-round-three failures as design problems, not quality problems
Avoid This
- Iterate until the critic returns a perfect score — perfect scores do not exist
- Let the critic decide when to stop — it will always find something to improve
- Increase the round limit because "we are almost there" — that is the sunk cost fallacy
- Skip the stopping condition and "see how it goes" — open-ended loops drain budgets