PM-301g · Module 1
The Library as Infrastructure
4 min read
Every organization that uses AI seriously eventually builds a "prompt folder." Someone creates a shared drive, drops in some .txt files, names them things like "summarize_v2_FINAL_USE_THIS_ONE.txt," and calls it a library. It is not a library. It is a collection. A library is indexed, versioned, searchable, and maintained. A collection is a pile with good intentions.
The difference matters at scale. A collection of 20 prompts is navigable. A collection of 200 is not. At 200, you start rewriting prompts you already have because you cannot find them. At 500, you have contradictory prompts in production for the same task because no one knew a better one existed. The rewrite tax compounds. The inconsistency tax compounds. Both are invisible until they are not.
A prompt library is infrastructure in the same category as a code repository or a configuration management system. It has schema (what data is stored about each prompt), index (how prompts are found), versioning (what changed and when), and governance (who owns each prompt and who can change it). None of these are optional at production scale. All of them require deliberate design before the library grows to the point where they are hard to retrofit.
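To make the schema idea concrete, here is a minimal sketch of what one library entry might record. The field names are illustrative assumptions, not a standard; the point is that identity, versioning, and ownership are structural fields, not afterthoughts:

```python
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    """One library entry. Every field below is required metadata, not decoration."""
    prompt_id: str   # stable identifier -- how the index refers to this prompt
    task: str        # what the prompt is for, e.g. "summarize_support_ticket"
    body: str        # the prompt text itself
    version: int     # bumped on every change (what changed and when)
    owner: str       # who is accountable for this prompt (governance)
    tags: list[str] = field(default_factory=list)       # index keys for search
    changelog: list[str] = field(default_factory=list)  # human-readable change notes
```

A real system would likely store this in a database or version-controlled repository rather than in-memory objects, but the schema question is the same either way: what must be true of every entry before it is allowed in.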
Do This
- Treat the prompt library as infrastructure — designed before it is needed
- Define schema, indexing, and governance before adding the first prompt
- Enforce metadata requirements on all entries from day one
- Search before writing — the library is only valuable if you use it to find what exists
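Two of the practices above, enforcing metadata from day one and searching before writing, can be sketched in a few lines. This is a hypothetical in-memory illustration, not a prescribed implementation; the required-field set and dictionary storage are assumptions:

```python
def add_entry(library: dict, entry: dict) -> None:
    """Enforce metadata requirements on every entry -- reject, don't defer."""
    required = {"prompt_id", "task", "owner", "version"}
    missing = required - entry.keys()
    if missing:
        raise ValueError(f"entry rejected, missing metadata: {sorted(missing)}")
    library[entry["prompt_id"]] = entry

def search(library: dict, term: str) -> list[dict]:
    """Search before writing: find existing prompts by task keyword or tag."""
    term = term.lower()
    return [
        e for e in library.values()
        if term in e["task"].lower()
        or term in (t.lower() for t in e.get("tags", []))
    ]
```

The design choice worth noticing: `add_entry` raises rather than filling in defaults. A library that silently accepts incomplete entries is back to being a pile with good intentions.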
Avoid This
- Create a shared folder and call it a library
- Add metadata "later, when we have time" — you will not have time
- Let prompts accumulate without ownership or version tracking
- Treat rewriting existing prompts as acceptable — it is a failure mode, not a workflow