PM-301b · Module 2

Chain-Shot Prompting

5 min read

Chain-shot prompting shows the work inside the example — not just input and output, but the reasoning steps between them. It is few-shot combined with chain-of-thought: examples that demonstrate how to reason, not just what to conclude. Use it when the task requires non-obvious reasoning steps that the model cannot reliably reconstruct from input-output pairs alone.

# Standard few-shot (shows what, not how)
Input: A SaaS company with 500 employees wants to reduce onboarding time by 40%.
They currently use a 3-week manual process.
Output: Deploy an AI-assisted onboarding agent with structured data collection,
automated provisioning, and a self-serve knowledge base. Target: 1-week cycle.

# Chain-shot (shows reasoning)
Input: A SaaS company with 500 employees wants to reduce onboarding time by 40%.
They currently use a 3-week manual process.

Reasoning:
- Current state: 3 weeks, manual. 40% reduction = target under 2 weeks.
- Bottlenecks in manual onboarding: data collection, system provisioning, training.
- AI-addressable bottlenecks: data collection (structured intake), provisioning (automation),
  training (self-serve knowledge base).
- 500 employees = mid-market. ROI threshold for full automation is met at this scale.
- Risk: knowledge base quality. Must seed with existing documentation before launch.

Output: Deploy an AI-assisted onboarding agent handling structured intake, automated
provisioning, and self-serve training. Seed the knowledge base with existing SOPs before
launch. Target: under 2 weeks. Risk to monitor: knowledge base completeness.
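In code, a chain-shot prompt is just the examples above concatenated with their reasoning steps, with the new input ending on a "Reasoning:" cue so the model reasons before it answers. A minimal sketch (the `ChainShotExample` and `build_chain_shot_prompt` names are illustrative, not from any library):

```python
from dataclasses import dataclass

@dataclass
class ChainShotExample:
    input: str
    reasoning: list[str]  # terse steps: the pattern, not every micro-step
    output: str

def build_chain_shot_prompt(examples: list[ChainShotExample], new_input: str) -> str:
    """Assemble a chain-shot prompt: each example shows input, reasoning
    steps, and output; the new input ends with a 'Reasoning:' cue so the
    model produces reasoning before its answer."""
    parts = []
    for ex in examples:
        steps = "\n".join(f"- {s}" for s in ex.reasoning)
        parts.append(f"Input: {ex.input}\n\nReasoning:\n{steps}\n\nOutput: {ex.output}")
    # End with the cue, mirroring the example format.
    parts.append(f"Input: {new_input}\n\nReasoning:")
    return "\n\n".join(parts)
```

Note that the final segment deliberately stops at "Reasoning:" rather than at "Output:", which is what asks the model to show its work instead of jumping to a conclusion.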

Do This

  • Use chain-shot when the reasoning process is as important as the conclusion
  • Use chain-shot when the task requires multi-step analysis that IO pairs alone cannot demonstrate
  • Keep reasoning steps terse — demonstrate the reasoning pattern, not every micro-step
  • Ensure the reasoning in examples matches the reasoning you want in outputs

Avoid This

  • Using chain-shot when IO pairs are sufficient; it adds tokens for no gain
  • Including reasoning in examples without asking for reasoning in the output prompt
  • Using reasoning steps that are themselves too complex to generalize from
  • Demonstrating reasoning that contradicts or ignores the stated constraints
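Because the prompt ends with a "Reasoning:" cue, a well-formed completion contains reasoning steps followed by an "Output:" line that mirrors the examples. A small parser can enforce that shape and catch completions that skip the reasoning (a hypothetical helper, assuming the model follows the example format):

```python
def split_reasoning_and_output(response: str) -> tuple[str, str]:
    """Split a chain-shot completion into (reasoning, output).
    Assumes the model mirrors the example format and emits a final
    'Output:' line; raises if it skipped the reasoning steps."""
    marker = "Output:"
    if marker not in response:
        raise ValueError("completion did not follow the chain-shot format")
    reasoning, _, output = response.partition(marker)
    reasoning = reasoning.strip()
    if not reasoning:
        raise ValueError("completion skipped the reasoning steps")
    return reasoning, output.strip()
```

Rejecting completions with empty reasoning is one concrete way to keep the reasoning you asked for in the examples from silently disappearing in the outputs.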