PM-101 · Module 1
The Specification Problem
3 min read
Most people approach AI models the way they approach a search engine — type something vague, see what comes back, refine from there. That workflow is fine for finding a restaurant. It is not fine for producing a contract summary, a sales email, a technical analysis, or anything else where the output has to meet a defined standard. When your output has to be right, the prompt has to be precise. The model cannot infer what you meant. It can only respond to what you wrote.
Language models are pattern-completion systems. They do not reason about what you intended. They process what you provided and produce a statistically likely continuation based on their training. When your prompt is vague, the model fills in the gaps with patterns from its training data — which may or may not match what you actually needed. The output surprises you not because the model is unpredictable, but because your specification was incomplete. The third revision happens because the first prompt was a wish, not an instruction.
Do This
- Write prompts that define role, task, context, format, and constraints explicitly
- Treat every prompt as a contract clause — every word earns its place
- Specify the output format before asking for the output
- State what you do NOT want as clearly as what you do want
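The five-part structure above can be sketched as a small template. This is a minimal illustration, not a library API: `PromptSpec` and its field names are assumptions made for this example. The point is that every section is required — an empty section is a specification error, caught before the prompt is ever sent.

```python
from dataclasses import dataclass, fields

@dataclass
class PromptSpec:
    """One field per required part of the specification."""
    role: str
    task: str
    context: str
    format: str
    constraints: str  # include what you do NOT want, not just what you do

    def render(self) -> str:
        # Refuse to produce a prompt with an unspecified section —
        # a blank field is the "wish, not an instruction" failure mode.
        missing = [f.name for f in fields(self) if not getattr(self, f.name).strip()]
        if missing:
            raise ValueError(f"unspecified sections: {missing}")
        return "\n\n".join(
            f"{f.name.upper()}:\n{getattr(self, f.name)}" for f in fields(self)
        )

spec = PromptSpec(
    role="You are a contracts paralegal.",
    task="Summarize the attached clause in plain English.",
    context="Audience: sales reps with no legal training.",
    format="Three bullet points, each under 25 words.",
    constraints="Do not paraphrase defined terms; do not give legal advice.",
)
print(spec.render())
```

The exact field names matter less than the discipline: the template forces you to write each part down, which is the whole argument of this module.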
Avoid This
- Do not type a vague description and iterate blindly from bad outputs
- Do not assume the model knows your intent from context it was not given
- Do not leave format to interpretation — specify it or accept randomness
- Do not confuse quantity of words with quality of specification
If you did not write it down, you did not ask for it.
— FORGE