PM-301g · Module 2
Retrieval Patterns
4 min read
Storage determines what the library can hold. Retrieval patterns determine whether it is usable. A library that takes five minutes to search is not a library; it is a burden. Retrieval must be fast enough that searching the library is cheaper than writing a new prompt from scratch. That is the usability bar.
Three retrieval patterns cover the vast majority of real use cases.

Tag-based retrieval: query the library by one or more tags. "Give me all active prompts tagged [sales, email]." Fast and predictable, but it requires a consistent taxonomy. This is the primary retrieval method for most teams.

Task-type retrieval: query by the task the prompt performs: summarization, extraction, classification, generation. Works when prompts are consistently tagged with task type, which they should be if your taxonomy includes a task dimension.

Output-schema retrieval: query by the expected output format. "Give me all prompts that output JSON with a confidence field." Useful when downstream systems are strongly typed and require a specific output schema. Less common, but critical for integration-heavy environments.
# Tag-based retrieval
# Find all active prompts for sales email writing
GET /prompts?tags=sales,email&status=active
# Task-type retrieval
# Find all active summarization prompts for meeting notes
GET /prompts?task=summarization&domain=operations&status=active
# Output-schema retrieval
# Find prompts that output structured JSON with a specific field
GET /prompts?output_format=json&output_field=confidence_score&status=active
# Combined filter
# Find short-form, B2B, cold outreach prompts, sorted by test date
GET /prompts?tags=cold-outreach,b2b,short-form&status=active&sort=test_date:desc
# Similarity search (embedding-based, for advanced implementations)
# "I need a prompt that extracts key decisions from meeting transcripts"
POST /prompts/search
{ "query": "extract key decisions from meeting transcripts", "top_k": 5 }
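The queries above can be sketched as an in-memory implementation. This is a minimal illustration, not a prescribed design: the record shape (tags, task, output_fields, status), the sample prompts, and the toy word-count embedding are all assumptions; a real system would back this with a database and model-generated embeddings.

```python
import math
from collections import Counter
from dataclasses import dataclass


@dataclass
class Prompt:
    # Hypothetical record shape; field names mirror the query parameters above.
    name: str
    text: str
    tags: frozenset
    task: str
    output_fields: frozenset
    status: str = "active"


def filter_prompts(prompts, tags=None, task=None, output_field=None, status="active"):
    """Combined filter: every supplied criterion must match (AND semantics)."""
    results = []
    for p in prompts:
        if p.status != status:
            continue
        if tags and not set(tags) <= p.tags:        # all requested tags present
            continue
        if task and p.task != task:
            continue
        if output_field and output_field not in p.output_fields:
            continue
        results.append(p)
    return results


def embed(text):
    """Toy embedding: a word-count vector. Real systems use model embeddings."""
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def similarity_search(prompts, query, top_k=5):
    """Rank prompts by cosine similarity between query and prompt text."""
    q = embed(query)
    ranked = sorted(prompts, key=lambda p: cosine(q, embed(p.text)), reverse=True)
    return ranked[:top_k]


# Illustrative two-prompt library.
library = [
    Prompt("cold-email", "write a short b2b cold outreach email",
           frozenset({"sales", "email", "cold-outreach", "b2b"}),
           "generation", frozenset({"subject", "body"})),
    Prompt("meeting-summary", "summarize meeting notes and extract key decisions",
           frozenset({"operations", "meetings"}),
           "summarization", frozenset({"summary", "confidence_score"})),
]

# Tag-based: GET /prompts?tags=sales,email&status=active
print([p.name for p in filter_prompts(library, tags=["sales", "email"])])

# Output-schema: GET /prompts?output_field=confidence_score&status=active
print([p.name for p in filter_prompts(library, output_field="confidence_score")])

# Similarity: POST /prompts/search
print(similarity_search(library, "extract key decisions from meeting transcripts",
                        top_k=1)[0].name)
```

The AND semantics of the combined filter matches the query-string examples above: a prompt must carry every requested tag, not just one, which is what makes tag retrieval predictable when the taxonomy is consistent.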