PM-101 · Module 1

How Models Read Prompts

3 min read

Understanding what a language model actually does with your prompt removes the mystery from prompt engineering. The model reads the entire prompt as a single context window, weighing every part of it. Certain structural elements carry more predictive weight than others: what role it has been given, what task it has been assigned, what context surrounds the task, and what output format has been requested. Each of these is a signal. Strong signals produce focused outputs. Weak or absent signals produce guesswork.

Here is what the model weighs when it reads your prompt. Role: who is it being asked to be? This activates domain knowledge and sets the register of the response. Task: what specific action is being requested? The clearer the verb, the clearer the output. Context: what background information changes how the task should be executed? Without context, the model applies defaults. Format: what does the output look like? Headers, bullets, JSON, prose — if it is not specified, the model picks something. These four elements are the anatomy of every effective prompt.

1. Role activates behavior. Setting the role — "You are a senior contracts attorney" — shifts the model's response register, vocabulary, and reasoning frame. It is not decoration. It is a behavioral instruction.
2. Task defines the action. The task statement tells the model what to do. "Review," "summarize," "draft," "extract," "compare" — each verb implies a different type of output. Use the most specific verb that matches your need.
3. Context changes the output. Context is what makes the task specific to your situation. Without context, the model answers the generic version of your question. With context, it answers the specific one. Only include context that changes behavior.
4. Format determines deliverability. If the output will be pasted into a spreadsheet, say so. If it needs to be JSON, specify the schema. If it should be three bullet points, state that. Format specification is the most commonly skipped step and the most common cause of revision cycles.
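The four elements above can be sketched as a simple prompt template. This is an illustrative example, not a prescribed API — the function name, field names, and sample values are all hypothetical:

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a prompt from the four signals the model weighs:
    role, task, context, and format."""
    return "\n\n".join([
        f"You are {role}.",                  # Role: activates domain behavior
        f"Task: {task}",                     # Task: the specific verb and action
        f"Context: {context}",               # Context: only details that change behavior
        f"Output format: {output_format}",   # Format: what the deliverable looks like
    ])

prompt = build_prompt(
    role="a senior contracts attorney",
    task="Review the indemnification clause below and flag any one-sided terms.",
    context="We are the vendor; the counterparty drafted the agreement.",
    output_format="Three bullet points, each quoting the relevant clause language.",
)
print(prompt)
```

Each of the four lines is a distinct signal; dropping any one of them leaves the model to fill that gap with a default.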