PM-301b · Module 1
Why Examples Work
4 min read
In-context learning is the mechanism by which examples shape model behavior without updating model weights. When you provide input-output pairs in the prompt, the model uses them to infer the task specification, the output format, the reasoning style, and the appropriate level of detail — then applies that specification to the new input.
The practical implication: examples can specify things that are hard to describe in natural language. "Respond in the style of a pithy one-line insight" is vague. A single well-chosen example that shows exactly what a pithy one-line insight looks like is precise. The model can match format, tone, length, and register from a demonstration in ways that explicit instructions frequently fail to produce.
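A minimal sketch of what this looks like in practice: assembling input-output pairs into a few-shot prompt so the model can infer format, tone, and length from the demonstrations. The helper name `build_few_shot_prompt`, the `Input:`/`Insight:` labels, and the example pairs are all illustrative, not a specific library's API.

```python
# Illustrative example pairs for the "pithy one-line insight" task.
EXAMPLES = [
    ("Our retention dipped after we removed onboarding emails.",
     "Cutting a touchpoint to reduce noise can cut the signal too."),
    ("Users love the feature but never find it.",
     "An undiscoverable feature is indistinguishable from a missing one."),
]

def build_few_shot_prompt(examples, new_input):
    """Format demonstrations so the model can infer task, style, and length,
    then leave the final completion slot open for the new input."""
    parts = [f"Input: {text}\nInsight: {insight}" for text, insight in examples]
    parts.append(f"Input: {new_input}\nInsight:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(EXAMPLES, "Support tickets spike every release day.")
print(prompt)
```

Each demonstration carries the specification implicitly: the model sees two complete `Input`/`Insight` pairs and one incomplete pair, and completes the pattern.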
When do examples beat instructions?
- Format-heavy tasks
- Style-dependent tasks
- Tasks where the output characteristics are easier to demonstrate than to describe

When do instructions beat examples?
- Tasks with many variants where no small example set is representative
- Tasks with strict logical rules that examples illustrate inconsistently
- Tasks where the instruction is shorter and clearer than any set of examples could be
Do This
- Use examples when the output format is complex or hard to describe precisely
- Use examples for style-dependent tasks where "write like X" needs a concrete reference
- Use examples when the task has a non-obvious reasoning pattern
- Combine examples with instructions when both provide complementary constraints
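The last point can be sketched directly: let the instruction state the hard constraints and let the examples demonstrate what the instruction leaves vague. The instruction text, labels, and example pair below are illustrative assumptions, not a prescribed template.

```python
# Explicit rules go in the instruction; tone and register come from the example.
INSTRUCTION = (
    "Rewrite the observation as a one-line insight. "
    "Stay under 15 words and avoid jargon."
)

EXAMPLES = [
    ("Our retention dipped after we removed onboarding emails.",
     "Cutting a touchpoint to reduce noise can cut the signal too."),
]

def build_prompt(instruction, examples, new_input):
    """Combine an instruction with demonstrations: each constrains
    what the other underspecifies."""
    demos = "\n\n".join(f"Input: {t}\nInsight: {i}" for t, i in examples)
    return f"{instruction}\n\n{demos}\n\nInput: {new_input}\nInsight:"

prompt = build_prompt(INSTRUCTION, EXAMPLES, "Support tickets spike every release day.")
print(prompt)
```

The division of labor is the point: "under 15 words" is a rule examples would illustrate inconsistently, while the example's register is something no short instruction captures precisely.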
Avoid This
- Using examples as the sole specification when the task has many variants your examples don't cover
- Assuming examples are always superior to instructions; measure both
- Using examples that are longer than the intended output to demonstrate brevity
- Using examples that differ from the real task distribution in ways that matter