DR-101 · Module 1
The Verification Habit
3 min read
AI models hallucinate. This is not a bug that will be patched — it is a structural feature of how language models work. They generate plausible text, not verified text. A model will confidently cite a study that does not exist, attribute a quote to someone who never said it, and invent statistics that sound perfectly reasonable. The verification habit — systematically checking key claims before acting on them — is not paranoia. It is basic research hygiene.
The "trust but verify" framework gives you a practical system. Tier 1 claims — decisions will be made based on this data — require independent verification from a primary source. Check the actual study, the actual filing, the actual dataset. Tier 2 claims — context and background information — require corroboration from one additional source. If two independent sources agree, the claim is likely solid. Tier 3 claims — general orientation and framing — can be accepted provisionally as long as they are not load-bearing in your analysis.
- Triangulate Key Claims: for any claim that matters, confirm it against three independent sources. If three unrelated sources agree, confidence is high. If they disagree, you have found something worth investigating.
- Ask the Model to Self-Check: prompt it with "Which of the claims in your response are you least confident about? Which statistics might be approximate or outdated?" Models can sometimes flag their own uncertainty when asked directly; a sketch of sending this as an API follow-up appears after this list.
- Check Named Sources: if the AI cites a specific study, author, or dataset by name, verify that it exists. Search for the exact title. This single step catches the majority of hallucinated citations; see the Crossref lookup sketch below.
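The self-check prompt works the same way over an API: append it as a follow-up turn after the model's draft. A minimal sketch using the OpenAI Python SDK (the model name is a placeholder; substitute whatever model produced the draft):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "..."  # the model's earlier answer, carried over verbatim

review = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "assistant", "content": draft},
        {"role": "user", "content": (
            "Which of the claims in your response are you least confident about? "
            "Which statistics might be approximate or outdated?"
        )},
    ],
)
print(review.choices[0].message.content)
```

Treat the reply as a pointer to where to look first, not as verification itself; a model can be confidently wrong about its own confidence.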
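For academic citations, a title search against the Crossref API is a quick existence check. A sketch, assuming the cited work has a DOI (Crossref only indexes DOI-registered works, so an empty result is a flag to keep searching, not proof of fabrication):

```python
import requests

def crossref_title_search(title: str, rows: int = 5) -> list[dict]:
    """Look up works on Crossref whose titles match the cited one."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.title": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

cited_title = "Attention Is All You Need"
for work in crossref_title_search(cited_title):
    # Each hit includes the indexed title and a DOI for follow-up.
    print(work.get("title", ["<untitled>"])[0], "->", work.get("DOI"))
```

Compare the returned titles and author lists against the citation exactly; a near-match with different authors is itself worth a closer look.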