CW-101 · Module 2
Skills That Stick
5 min read
Skills are the most important concept in this entire ecosystem. Not Co-Work. Not Claude Code. Not plugins. Skills. Because skills are what persist. A conversation ends. A session closes. A project wraps up. Skills endure.
Think of a skill as a hyper-detailed SOP — your standard operating procedure for some task you do regularly. But unlike an SOP binder collecting dust on a shelf, skills are invoked just-in-time by Claude when the context calls for them. You do not have to remember to use them. You do not have to search for them. You do not have to say "apply the research skill." Claude reads your prompt, recognizes the trigger words, finds a match in the skill library, and loads the relevant skill automatically.
That is a fundamentally different relationship with documentation. SOPs require discipline to follow. Skills require nothing — they fire when the context is right.
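To make the idea concrete, here is a toy sketch of just-in-time matching. This is not how Claude actually selects skills — its matching is far more sophisticated than substring checks — and the skill names and library structure are invented for illustration. The point is the shape of the mechanism: the prompt is compared against each skill's triggers, and matching skills load without you asking.

```python
# Hypothetical sketch only: a skill library keyed by trigger phrase.
# Claude's real matching is semantic, not a literal substring check.
SKILL_LIBRARY = {
    "podcast research": "SOP: find guest bio, past episodes, talking points...",
    "client proposal qa": "SOP: check dates, pricing, formatting...",
}

def match_skills(prompt: str) -> list[str]:
    """Return skills whose trigger phrase appears in the prompt."""
    prompt = prompt.lower()
    return [name for name in SKILL_LIBRARY if name in prompt]

# A prompt that contains the trigger loads the skill automatically:
match_skills("Run podcast research on next week's guest")
```

Notice what the sketch implies about trigger design: a specific phrase like "podcast research" matches exactly when it should, while a vague trigger like "help" would match almost every prompt.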
The real skill design insight — and this is the part that separates amateurs from professionals in this space — is to separate skills by task, not by project.
You could theoretically create a "client proposal" mega-skill that does research, writes the pitch, formats the document, and runs QA. It would work. Sometimes. And then it would break in unpredictable ways because a single skill trying to do five things has too many failure modes. Which step broke? Where do you debug? How do you improve one step without risking the others?
Instead: one skill for research. One skill for writing. One skill for formatting. One skill for QA. Four skills, each focused, each testable, each improvable in isolation. Chain them in a workflow: research skill produces the data, writing skill consumes it and produces the draft, formatting skill makes it presentable, QA skill catches the errors. Each link in the chain does one thing well.
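The chain above can be sketched as four single-purpose functions, each consuming the previous one's output. This is an illustrative analogy, not Co-Work's actual API — the function names and data shapes are invented — but it shows why separation pays off: each step can be tested and improved in isolation.

```python
# Illustrative analogy only: each "skill" is a focused, testable step.
def research(topic: str) -> dict:
    # Produce structured data for the next step.
    return {"topic": topic, "facts": ["fact A", "fact B"]}

def write(data: dict) -> str:
    # Consume the research output, produce a draft.
    return f"Draft on {data['topic']}: " + "; ".join(data["facts"])

def format_doc(draft: str) -> str:
    # Stand-in for real formatting rules.
    return draft.upper()

def qa(doc: str) -> str:
    # Catch errors before the document ships.
    assert "DRAFT" in doc, "QA check failed"
    return doc

proposal = qa(format_doc(write(research("client proposal"))))
```

If the draft comes out wrong, you debug `write` alone; if formatting breaks, you touch `format_doc` without risking the research logic. A mega-function doing all four gives you no such seam.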
Do This
- One skill per task — research, writing, QA should be separate skills
- Use trigger words that are specific: "podcast research" not "research stuff"
- Chain skills in workflows: research -> writing -> QA
- Test in fresh sessions — existing context can accidentally trigger skills
Avoid This
- Creating mega-skills that do five things — they break in unpredictable ways
- Using vague trigger words that match everything — "help with projects"
- Skipping testing — a skill that fires incorrectly is worse than no skill at all
- Forgetting to iterate — skills should evolve as your workflow evolves
Let me tell you why I keep circling back to this idea of iteration. A skill you build today is Version 1. It captures your current understanding of the task. But your understanding evolves. You discover edge cases you did not anticipate. You find better approaches. You get corrected by stakeholders who wanted something different from what you assumed.
Every correction is a skill improvement opportunity. "The research skill missed competitor pricing — add a step to check G2 and Capterra for pricing data." "The QA skill did not catch the footer date being wrong — add a date accuracy check." Each improvement makes the skill more robust. Over months, a skill that started as a rough approximation becomes a refined, battle-tested procedure that encodes everything you have learned about that task.
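One way to picture this: treat the skill as a versioned checklist that only ever grows. The steps below are hypothetical, but the pattern is the one described above — each correction becomes a new permanent step.

```python
# Hypothetical example: a skill's step list, evolving one correction at a time.
research_skill = [
    "List the company's main products",
    "Summarize recent press coverage",
]

# Correction from a stakeholder: competitor pricing was missed.
research_skill.append("Check G2 and Capterra for competitor pricing data")

# Correction after a QA miss: the footer date was wrong.
research_skill.append("Verify all dates, including the footer")
```

Version 1 had two steps; after two corrections it has four, and it never loses them.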
This is compound interest for your AI workflow. Each correction compounds. Each improvement persists. The skill you use in month six is dramatically better than the skill you built in month one. And unlike human memory, the skill does not forget. It does not have a bad day. It does not skip steps because it is rushing.