PM-201b · Module 2
Calibration and Uncertainty
3 min read
Language models default to confident answers. This is a function of how they are trained: confident, fluent outputs are rewarded. In practical use, that default is a risk. A model that does not know the answer and says so is useful; a model that does not know the answer and generates a confident, fluent, wrong answer is a liability. Prompts that surface uncertainty rather than mask it are a design requirement for any high-stakes use case.
Pattern 1: Explicit unknown handling
"Answer the following questions based on the provided source document only.
For any question you cannot answer from the document, respond with:
'UNKNOWN — not available in the source document.'
Do not attempt to answer from general knowledge."
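A sentinel like this is only useful if downstream code actually checks for it. As a minimal sketch (the function name and the known/unknown split are illustrative, not part of the pattern itself), a wrapper can separate UNKNOWN responses from real answers before they reach a report or pipeline:

```python
# Hypothetical helper for Pattern 1: detect the UNKNOWN sentinel
# the prompt instructs the model to emit.
UNKNOWN_MARKER = "UNKNOWN — not available in the source document."

def classify_answer(response: str) -> tuple[bool, str]:
    """Return (is_known, text); unknown answers carry the sentinel."""
    text = response.strip()
    if text.startswith("UNKNOWN"):
        return (False, text)
    return (True, text)
```

Matching on the `UNKNOWN` prefix rather than the full string tolerates minor variation in the model's phrasing while still catching the sentinel.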
Pattern 2: Confidence levels
"For each item in your analysis, indicate your confidence level:
- HIGH: fully supported by the provided data
- MEDIUM: partially supported; some inference required
- LOW: limited data; significant uncertainty
Flag any LOW-confidence items for human review before action."
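The value of confidence labels is that they are machine-parseable. Assuming each item in the model's output is a line prefixed with its label (an assumption about the response format, not guaranteed by the prompt alone), a small parser can collect the LOW-confidence items for review:

```python
import re

# Hypothetical parser for Pattern 2 output: assumes one item per line,
# prefixed "HIGH:", "MEDIUM:", or "LOW:".
CONF_RE = re.compile(r"^(HIGH|MEDIUM|LOW):\s*(.+)$")

def items_needing_review(analysis: str) -> list[str]:
    """Collect LOW-confidence items for human review."""
    flagged = []
    for line in analysis.splitlines():
        m = CONF_RE.match(line.strip())
        if m and m.group(1) == "LOW":
            flagged.append(m.group(2))
    return flagged
```

If the flagged list is non-empty, the pipeline can block automated action until a human signs off.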
Pattern 3: Explicit uncertainty acknowledgment
"If any aspect of the following analysis requires assumptions or extrapolation beyond the provided information, state those assumptions explicitly before your conclusion.
Format: [ASSUMPTION: description of assumption] before any conclusion that depends on it."
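Because the pattern fixes a bracketed format, the stated assumptions can be extracted and logged alongside each conclusion. A minimal sketch (the function name is illustrative):

```python
import re

# Hypothetical extractor for Pattern 3's [ASSUMPTION: ...] tags.
ASSUMPTION_RE = re.compile(r"\[ASSUMPTION:\s*([^\]]+)\]")

def extract_assumptions(text: str) -> list[str]:
    """Return every stated assumption, in order of appearance."""
    return [a.strip() for a in ASSUMPTION_RE.findall(text)]
```

An empty result on an analysis you expected to involve extrapolation is itself a signal worth reviewing: it may mean the model skipped the format rather than made no assumptions.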
Pattern 4: I don't know
"If you do not have sufficient information to answer a question accurately, say 'I don't have enough information to answer this reliably.' Do not generate a plausible-sounding answer when the actual answer is unknown. I prefer no answer to a wrong answer."
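Operationalizing "I prefer no answer to a wrong answer" means the refusal phrase must trigger a different code path than a normal answer. As a sketch (the escalation callback is a stand-in for whatever fallback your system uses, such as a human review queue):

```python
# Hypothetical router for Pattern 4: detect the agreed refusal phrase
# and hand the query off instead of passing the response through.
REFUSAL = "I don't have enough information to answer this reliably."

def answer_or_escalate(response: str, escalate) -> str:
    """Prefer no answer to a wrong answer: route refusals elsewhere."""
    if REFUSAL in response:
        return escalate(response)
    return response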