DR-301b · Module 1

Anatomy of a Research-Grade Prompt

4 min read

A research-grade prompt is an engineered artifact with seven structural components, each serving a specific function:

1. Role assignment tells the model what expertise to activate.
2. Context provides the background information the model needs to reason correctly.
3. Task specification defines what output is expected.
4. Constraint set establishes boundaries: what to include, what to exclude, what confidence thresholds to enforce.
5. Format directive specifies the exact output structure.
6. Quality criteria define how the output will be evaluated.
7. Meta-instruction tells the model how to handle uncertainty, gaps, and edge cases.

Most research prompts fail at component five or six. The researcher specifies the task clearly but leaves the format and quality criteria implicit. The model fills the gaps with defaults — which are optimized for general helpfulness, not intelligence production. A prompt that says "analyze this market" will produce a different output every time. A prompt that says "analyze this market as a ranked threat assessment, scoring each competitor 1-5 on three dimensions, citing sources for every quantitative claim, flagging any assessment below medium confidence" will produce consistent, evaluable output.
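"Evaluable" is meant literally: once format and quality criteria are explicit, the output can be checked by a script, not just by eye. A minimal sketch, assuming the model returns a markdown table; the column names come from the format directive in the example prompt below, while the function name and parsing logic are illustrative, not part of the module:

```python
# Hypothetical sketch: mechanically checking a model's output against an
# explicit format directive. Column names mirror the example prompt's
# FORMAT section; everything else here is an illustrative assumption.

REQUIRED_COLUMNS = [
    "Company", "Est. ARR", "Threat Score (1-5)",
    "Primary Threat Vector", "Evidence", "Confidence Level",
]

def validate_output(markdown_table: str) -> list:
    """Return a list of format violations; an empty list means the output passes."""
    rows = [r for r in markdown_table.strip().splitlines() if r.strip()]
    header = [c.strip() for c in rows[0].strip().strip("|").split("|")]
    errors = []
    missing = [c for c in REQUIRED_COLUMNS if c not in header]
    if missing:
        errors.append(f"missing columns: {missing}")
        return errors
    score_col = header.index("Threat Score (1-5)")
    for row in rows[2:]:  # rows[1] is the |---|---| separator line
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        if cells[score_col] not in {"1", "2", "3", "4", "5"}:
            errors.append(f"threat score out of range: {cells[score_col]!r}")
    return errors
```

An underspecified prompt ("analyze this market") gives you nothing this check can grip; the fully specified version fails loudly when a column is missing or a score falls outside 1-5.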

## Research Prompt: Seven Components

ROLE: You are a competitive intelligence analyst with
expertise in B2B SaaS markets and 10 years of experience
producing executive-level threat assessments.

CONTEXT: [Company X] is a $50M ARR vertical SaaS platform
in construction management. Their Q4 earnings showed
12% YoY growth, down from 18% in Q3.

TASK: Produce a competitive threat assessment for the
top 5 direct competitors.

CONSTRAINTS:
- Public data only (filings, press, job postings)
- Exclude horizontal project management tools
- North American market only
- Companies must have $10M+ estimated ARR

FORMAT: Ranked table with columns: Company, Est. ARR,
Threat Score (1-5), Primary Threat Vector, Evidence
(2-3 bullets per competitor), Confidence Level.

QUALITY: Every threat score must cite at least one
observable signal. Flag any assessment where confidence
is below "medium" and explain the data gap.

META: If data is insufficient to score a competitor,
say so explicitly rather than guessing. Distinguish
between "assessed with high confidence" and "estimated
based on limited data."
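Prompts built this way can also be assembled programmatically, which keeps the structure consistent across analysts and makes each component reviewable on its own. A minimal sketch; the function name and dict layout are illustrative assumptions, not an API from the module:

```python
# Illustrative sketch: assemble the seven components into one prompt
# string in canonical order, failing loudly if any component is missing.

COMPONENT_ORDER = [
    "ROLE", "CONTEXT", "TASK", "CONSTRAINTS", "FORMAT", "QUALITY", "META",
]

def build_prompt(components: dict) -> str:
    """Join labeled components in canonical order; raise if one is absent."""
    missing = [name for name in COMPONENT_ORDER if name not in components]
    if missing:
        raise ValueError(f"missing components: {missing}")
    return "\n\n".join(
        f"{name}: {components[name].strip()}" for name in COMPONENT_ORDER
    )
```

Storing prompts as structured components rather than free text also lets a team version and diff each component independently, so a change to the constraint set never silently disturbs the quality criteria.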