PM-201b · Module 2

Building Self-Critique Loops

4 min read

A self-critique loop instructs the model to produce an output and then evaluate that output against specified criteria before returning it. This is not asking the model to check its own work the way a human proofreads. It is asking the model to apply a second pass of reasoning that is explicitly structured as evaluation rather than generation. Done correctly, it catches format failures, scope overruns, logical inconsistencies, and missing required elements without requiring a second prompt.

Pattern 1: Single-prompt self-review
"Draft a risk assessment for the attached project plan.

After drafting, review your assessment for the following before returning:
- Does it address all three risk categories specified in the task?
- Are all claims supported by information in the project plan? (No extrapolation)
- Is the total word count within the 300-word limit?

If any check fails, revise before returning. Return only the final revised assessment."
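If you are sending many tasks through the same review structure, the checklist can be assembled programmatically rather than pasted by hand. The sketch below is a minimal illustration of building a Pattern 1 prompt; the helper name `build_self_review_prompt` and the example task text are placeholders, not part of the course material.

```python
def build_self_review_prompt(task: str, checks: list[str]) -> str:
    """Append a Pattern 1 self-review checklist to a task prompt."""
    lines = [task, "", "After drafting, review your output for the following before returning:"]
    lines += [f"- {check}" for check in checks]
    lines += ["", "If any check fails, revise before returning. Return only the final revised output."]
    return "\n".join(lines)

prompt = build_self_review_prompt(
    "Draft a risk assessment for the attached project plan.",
    [
        "Does it address all three risk categories specified in the task?",
        "Are all claims supported by information in the project plan? (No extrapolation)",
        "Is the total word count within the 300-word limit?",
    ],
)
print(prompt)
```

Keeping the checks in a plain list also makes it easy to version them alongside the rest of the prompt.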

Pattern 2: Explicit critique-then-revise
"Draft a sales email using the context below.

Then critique your draft against these criteria, rating each criterion 1-5:
1. Does the subject line create specific curiosity (not generic)?
2. Is the opening line about the prospect, not about us?
3. Is there exactly one CTA?
4. Is the body 120 words or fewer?

For any criterion rated below 4/5, revise the draft to address the failure.
Return: [CRITIQUE] then [REVISED EMAIL]."
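The model's self-ratings are useful, but the objective criteria (CTA count, word count) can also be verified client-side before the email is used, so a failed check can trigger a retry. The following is a rough sketch under two stated assumptions: CTAs are tagged with a literal `[CTA]` marker, and the "generic subject" test is a simplistic blocklist. Both are illustrative choices, not part of the pattern itself.

```python
import re

# Assumption: the prompt asks the model to tag its call to action as "[CTA]".
GENERIC_SUBJECTS = {"hello", "quick question", "checking in"}  # illustrative blocklist

def check_email(subject: str, body: str) -> dict[str, bool]:
    """Client-side verification of the objective Pattern 2 criteria."""
    cta_count = len(re.findall(r"\[CTA\]", body))
    return {
        "one_cta": cta_count == 1,
        "under_120_words": len(body.split()) <= 120,
        "subject_not_generic": subject.strip().lower() not in GENERIC_SUBJECTS,
    }

results = check_email(
    "Your Q3 churn numbers",
    "Hi Sam, noticed your renewal window opens next month. [CTA] Book a 15-minute slot.",
)
print(results)
```

The subjective criteria (curiosity, prospect-first opening) still belong in the model's own critique pass; this only gates the checks a script can decide deterministically.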

Pattern 3: Format verification loop
"Extract all action items from the meeting transcript below and return them as a JSON array.

After generating the array, verify:
- Every item has keys: owner, action, due_date
- due_date is formatted YYYY-MM-DD
- No items were missed from the transcript

If verification reveals errors, correct them. Return only the verified JSON."
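Format checks like these are exactly the kind a calling script can repeat deterministically, so a failed verification can be fed back to the model or raised as an error. The sketch below mirrors the three checks in Pattern 3 (minus transcript completeness, which a script cannot judge); the function name `verify_action_items` is a placeholder of my own.

```python
import json
from datetime import datetime

REQUIRED_KEYS = {"owner", "action", "due_date"}

def verify_action_items(raw: str) -> list[str]:
    """Return a list of problems in the model's JSON output; empty means it passed."""
    try:
        items = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    if not isinstance(items, list):
        return ["top-level value is not an array"]
    problems = []
    for i, item in enumerate(items):
        if not isinstance(item, dict):
            problems.append(f"item {i}: not an object")
            continue
        missing = REQUIRED_KEYS - set(item)
        if missing:
            problems.append(f"item {i}: missing keys {sorted(missing)}")
            continue
        try:
            datetime.strptime(item["due_date"], "%Y-%m-%d")
        except ValueError:
            problems.append(f"item {i}: due_date not YYYY-MM-DD: {item['due_date']!r}")
    return problems

good = '[{"owner": "Ana", "action": "Send deck", "due_date": "2025-03-01"}]'
bad = '[{"owner": "Ana", "action": "Send deck", "due_date": "03/01/2025"}]'
print(verify_action_items(good))
print(verify_action_items(bad))
```

Running the same checks on both sides, inside the prompt and in the caller, is the cheap insurance this module is after: the in-prompt loop fixes most failures, and the script catches the rest.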